1.Research advances in abnormal eye movements in multiple system atrophy
Journal of Apoplexy and Nervous Diseases 2025;42(1):30-33
Multiple system atrophy (MSA) is a rare neurodegenerative disease with diverse, often atypical clinical manifestations that overlap with those of other α-synuclein spectrum disorders. Its diagnosis and early differential diagnosis therefore remain challenging, and missed diagnoses and misdiagnoses occur from time to time, delaying treatment. Videonystagmography (VNG) is currently the main noninvasive test used to assess vestibular function and can provide a range of eye movement parameters, and studies have demonstrated abnormal eye movements in patients with MSA. From the perspective of visual and oculomotor function, this article reviews the current state of research on eye movements in patients with MSA and examines the link between the two, in order to provide a reference for the early diagnosis of MSA.
Saccades
2.Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals.
Yangmeng ZOU ; Lilin JIE ; Mingxun WANG ; Yong LIU ; Junhua LI
Journal of Biomedical Engineering 2025;42(1):32-41
Existing emotion recognition research is typically limited to static laboratory settings and has not fully addressed the changes in emotional state that occur in dynamic scenarios. To address this problem, this paper proposes a method for dynamic continuous emotion recognition based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects to fill the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for the stimulus videos were given every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency band features were extracted from the preprocessed EEG and eye movement data, and a cascade feature fusion approach was used to combine the EEG and eye movement features into an information-rich multimodal feature vector. This feature vector was fed into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to build the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. The approach can accurately recognize various emotion transitions in dynamic situations and offers higher accuracy and robustness than using either EEG or eye movement signals alone, making it well suited for practical applications.
Humans; Electroencephalography/methods*; Emotions/physiology*; Eye Movements/physiology*; Signal Processing, Computer-Assisted; Support Vector Machine; Algorithms
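The fusion-and-regression pipeline summarized in the abstract above is not given in code there; the following is a minimal sketch of one plausible reading, assuming pre-extracted per-window EEG band features and eye movement features, plain concatenation as the cascade fusion step, and scikit-learn's RBF-kernel support vector regression. All array shapes, sizes, and variable names are illustrative.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Illustrative inputs: one row per 5-second rating window.
rng = np.random.default_rng(0)
n_windows = 600
eeg_features = rng.normal(size=(n_windows, 310))   # e.g., band power over channels and bands
eye_features = rng.normal(size=(n_windows, 33))    # e.g., pupil, fixation, saccade statistics
valence = rng.uniform(0, 1, size=n_windows)        # normalized continuous labels

# Cascade (concatenation) fusion of the two modalities into one feature vector per window.
fused = np.hstack([eeg_features, eye_features])

X_train, X_test, y_train, y_test = train_test_split(fused, valence,
                                                    test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
model = SVR(kernel="rbf", C=1.0, gamma="scale")    # one of the four regressors mentioned
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("valence MSE:", mean_squared_error(y_test, pred))
```

The same fit-and-score loop would be repeated for arousal and for the other three regressors (decision tree, random forest, K-nearest neighbors) to compare mean square errors.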
3.A method for emotion transition recognition using cross-modal feature fusion and global perception.
Lilin JIE ; Yangmeng ZOU ; Zhengxiu LI ; Baoliang LYU ; Weilong ZHENG ; Ming LI
Journal of Biomedical Engineering 2025;42(5):977-986
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants, each annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture combining convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, while the Transformer exploits its global perception capability to model long-range temporal dependencies, enabling accurate recognition of dynamic emotion transitions. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.
Humans; Emotions/physiology*; Electroencephalography; Neural Networks, Computer; Eye Movements; Perception
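The CFGPN architecture is only described in outline in the abstract above, so the sketch below illustrates just the parallel CNN/Transformer idea in PyTorch, with hypothetical layer sizes; the upstream cross-modal fusion stage (deep canonical correlation analysis plus cross-modal attention) is assumed to have already produced the fused per-time-step feature vectors.

```python
import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    """Toy parallel hybrid: a 1-D CNN branch for local temporal patterns and a
    Transformer encoder branch for long-range dependencies, merged per time step."""
    def __init__(self, feat_dim=128, hidden=64, n_heads=4, n_layers=2):
        super().__init__()
        self.cnn = nn.Sequential(                      # local time-series features
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.proj = nn.Linear(feat_dim, hidden)        # projection for the Transformer branch
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(2 * hidden, 2)           # per-step valence and arousal

    def forward(self, x):                              # x: (batch, time, feat_dim)
        local = self.cnn(x.transpose(1, 2)).transpose(1, 2)     # (batch, time, hidden)
        global_ctx = self.transformer(self.proj(x))             # (batch, time, hidden)
        return self.head(torch.cat([local, global_ctx], dim=-1))  # (batch, time, 2)

# Quick shape check on fused multimodal features (batch of 8, 60 time steps).
model = ParallelCNNTransformer()
out = model(torch.randn(8, 60, 128))
print(out.shape)  # torch.Size([8, 60, 2])
```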
4.The Role of Prefrontal and Posterior Parietal Cortex in Generating Multiple Step Saccades.
Wenbo MA ; Zhaohuan DING ; Leixiao FENG ; Xiaoli LI ; Mingsha ZHANG
Neuroscience Bulletin 2025;41(8):1418-1428
While multiple step saccades (MSS) are occasionally reported in the healthy population, they are more evident in patients with Parkinson's disease (PD). MSS have therefore been suggested as a biological marker for the diagnosis of PD. However, the lack of clarity about the neural mechanism underlying the generation of MSS largely impedes their clinical application. We recently proposed that MSS are triggered by the discrepancy between desired and executed saccades. Accordingly, brain regions involved in saccadic planning and execution might play a role in generating MSS. To test this hypothesis, we explored the role of the prefrontal cortex (PFC) and posterior parietal cortex (PPC) in generating MSS in two experiments in humans: electroencephalographic recording, and single-pulse transcranial magnetic stimulation over the PFC or PPC, while participants performed a gap saccade task. We found that the PFC and PPC are involved in the generation of MSS.
Humans; Parietal Lobe/physiology*; Saccades/physiology*; Prefrontal Cortex/physiology*; Male; Transcranial Magnetic Stimulation; Female; Electroencephalography; Adult; Young Adult
5.Transcranial temporal interference stimulation precisely targets deep brain regions to regulate eye movements.
Mo WANG ; Sixian SONG ; Dan LI ; Guangchao ZHAO ; Yu LUO ; Yi TIAN ; Jiajia ZHANG ; Quanying LIU ; Pengfei WEI
Neuroscience Bulletin 2025;41(8):1390-1402
Transcranial temporal interference stimulation (tTIS) is a novel non-invasive neuromodulation technique with the potential to precisely target deep brain structures. This study explores the neural and behavioral effects of tTIS on the superior colliculus (SC), a region involved in eye movement control, in mice. Computational modeling revealed that tTIS delivers more focused stimulation to the SC than traditional transcranial alternating current stimulation. In vivo experiments, including Ca2+ signal recordings and eye movement tracking, showed that tTIS effectively modulates SC neural activity and induces eye movements. A significant correlation was found between stimulation frequency and saccade frequency, suggesting direct tTIS-induced modulation of SC activity. These results demonstrate the precision of tTIS in targeting deep brain regions and regulating eye movements, highlighting its potential for neuroscientific research and therapeutic applications.
Animals; Superior Colliculi/physiology*; Transcranial Direct Current Stimulation/methods*; Eye Movements/physiology*; Male; Mice; Mice, Inbred C57BL
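The principle behind tTIS, as applied in the study above, is that two high-frequency currents at slightly different frequencies superimpose into a carrier whose amplitude envelope oscillates at the difference frequency, and only regions reached by both fields experience a strong envelope. The snippet below is a minimal numerical illustration of that envelope, not a model of the study's electrode montage or of the fields reaching the SC; the frequencies are arbitrary examples.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                        # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)      # 0.5 s of signal

f1, f2 = 2000.0, 2010.0            # two carrier frequencies; difference = 10 Hz
i1 = np.sin(2 * np.pi * f1 * t)    # field from electrode pair 1
i2 = np.sin(2 * np.pi * f2 * t)    # field from electrode pair 2

summed = i1 + i2                   # superposition at a point reached by both fields
envelope = np.abs(hilbert(summed)) # amplitude envelope oscillating at |f2 - f1| = 10 Hz

# The kHz carriers are too fast for neurons to follow, but the slow envelope is not,
# which is the proposed basis for frequency-specific modulation of deep targets.
print(f"envelope (difference) frequency ~ {abs(f2 - f1):.1f} Hz")
```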
6.Distributions of Visual Receptive Fields from Retinotopic to Craniotopic Coordinates in the Lateral Intraparietal Area and Frontal Eye Fields of the Macaque.
Lin YANG ; Min JIN ; Cong ZHANG ; Ning QIAN ; Mingsha ZHANG
Neuroscience Bulletin 2024;40(2):171-181
Even though retinal images of objects change their locations following each eye movement, we perceive a stable and continuous world. One possible mechanism by which the brain achieves such visual stability is to construct a craniotopic coordinate by integrating retinal and extraretinal information. There have been several proposals on how this may be done, including eye-position modulation (gain fields) of retinotopic receptive fields (RFs) and craniotopic RFs. In the present study, we investigated coordinate systems used by RFs in the lateral intraparietal (LIP) cortex and frontal eye fields (FEF) and compared the two areas. We mapped the two-dimensional RFs of neurons in detail under two eye fixations and analyzed how the RF of a given neuron changes with eye position to determine its coordinate representation. The same recording and analysis procedures were applied to the two brain areas. We found that, in both areas, RFs were distributed from retinotopic to craniotopic representations. There was no significant difference between the distributions in the LIP and FEF. Only a small fraction of neurons was fully craniotopic, whereas most neurons were between the retinotopic and craniotopic representations. The distributions were strongly biased toward the retinotopic side but with significant craniotopic shifts. These results suggest that there is only weak evidence for craniotopic RFs in the LIP and FEF, and that transformation from retinotopic to craniotopic coordinates in these areas must rely on other factors such as gain fields.
Animals; Macaca; Visual Fields; Frontal Lobe/physiology*; Eye Movements; Brain
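One common way to place a receptive field between purely retinotopic and purely craniotopic coordinates, as the analysis in the abstract above requires (the authors' exact metric is not given here), is a displacement index: how far the RF center moves on the screen when the eye moves. The sketch below assumes RF centers have already been fitted at two fixations; a value near 1 means the RF moved fully with the eye (retinotopic), and a value near 0 means it stayed fixed on the screen (craniotopic).

```python
import numpy as np

def rf_shift_index(rf_center_fix1, rf_center_fix2, eye_fix1, eye_fix2):
    """Shift of the RF center on the screen, as a fraction of the eye displacement.
    ~1 -> retinotopic, ~0 -> craniotopic, intermediate values -> mixed representation."""
    rf_shift = np.asarray(rf_center_fix2, float) - np.asarray(rf_center_fix1, float)
    eye_shift = np.asarray(eye_fix2, float) - np.asarray(eye_fix1, float)
    # Project the RF shift onto the eye-shift direction to obtain a signed scalar index.
    return float(np.dot(rf_shift, eye_shift) / np.dot(eye_shift, eye_shift))

# Hypothetical neuron: the eye moves 10 deg rightward and the RF center moves 8 deg.
print(rf_shift_index(rf_center_fix1=(5.0, 3.0), rf_center_fix2=(13.0, 3.0),
                     eye_fix1=(-5.0, 0.0), eye_fix2=(5.0, 0.0)))  # 0.8 -> mostly retinotopic
```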
7.Effect and differentiation of spontaneous nystagmus of acute unilateral vestibulopathy on saccade in the video head impulse test.
Qiaomei DENG ; Xueqing ZHANG ; Chao WEN ; Xiaobang HUANG ; Taisheng CHEN ; Wei WANG
Journal of Clinical Otorhinolaryngology Head and Neck Surgery 2024;38(12):1122-1133
Objective: To explore the characteristics of spontaneous nystagmus (SN) in the video head impulse test (vHIT) and its possible effects on saccades. Methods: Vestibular function tests, including vHIT and SN examination, were performed in 48 patients with acute unilateral vestibulopathy (AUVP). The saccade characteristics in vHIT were analyzed in patients with and without SN, together with the expression of SN within vHIT recordings. Results: Among the 48 AUVP patients, 34 had SN; of these, 31 showed saccades on the healthy side, including 11 with saccades in both the same and the opposite direction of the eye movement, 19 in the opposite direction only, and 1 in the same direction only, while 3 had no saccades. Of the 14 patients without SN, 10 showed saccades on the healthy side, including 4 with saccades in both directions, 2 in the opposite direction only, and 4 in the same direction only, while 4 had no saccades. Reverse saccades on the healthy side were correlated with the presence of SN. In vHIT, SN can appear opposite in direction to the eye-movement response on the healthy side, whereas on the affected side it can run in the same direction as the eye movement and may make overt saccades more dispersed. Of the 32 patients in the acute phase (≤2 weeks), 29 had SN, with an intensity of (6.7±3.2)°/s, and 3 had no SN; of the 16 patients beyond the acute phase (>2 weeks), 5 had SN, with an intensity of (3.7±2.1)°/s, and 11 had no SN. In the acute phase, 30 patients showed saccades on the healthy side, including 10 with saccades in both directions, 18 in the opposite direction only, and 2 in the same direction only, while 2 had no saccades. Disease duration was correlated with the occurrence of reverse saccades on the healthy side. In the vHIT of the healthy lateral semicircular canal, the cutoff SN intensity for producing reverse saccades was 2.1°/s. Conclusion: In the vHIT of AUVP patients, compensatory saccades and SN waves with similar waveforms commonly coexist. The SN wave runs opposite to the eye-movement wave on the healthy side, whereas on the affected side it runs in the same direction as the dominant saccades and mixes with them, which can affect the dispersion and amplitude of overt saccades in vHIT. Accurate identification of SN in the vHIT of AUVP patients is therefore not only key to identifying compensatory saccades but can also aid the diagnosis of AUVP and the assessment of compensation.
Humans; Head Impulse Test/methods*; Nystagmus, Pathologic/physiopathology*; Saccades/physiology*; Male; Female; Vestibular Diseases/physiopathology*; Middle Aged; Adult; Eye Movements/physiology*; Aged
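The abstract above reports a 2.1°/s SN-intensity cutoff for reverse saccades but does not state how it was derived; a common way to obtain such a threshold is ROC analysis with the Youden index, sketched below on made-up per-patient values (the data and the use of scikit-learn here are illustrative assumptions, not the authors' procedure).

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical per-patient data: SN intensity (deg/s) and whether reverse saccades
# were observed in the healthy-side lateral semicircular canal vHIT.
sn_intensity = np.array([0.5, 1.2, 1.8, 2.3, 2.6, 3.4, 4.1, 5.0, 6.2, 7.5])
reverse_saccade = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1])

fpr, tpr, thresholds = roc_curve(reverse_saccade, sn_intensity)
youden = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print(f"cutoff maximizing Youden's J: {best:.1f} deg/s")
```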
8.Neuronal Response to Reward and Luminance in Macaque LIP During Saccadic Choice.
Ziqi WU ; Aihua CHEN ; Xinying CAI
Neuroscience Bulletin 2023;39(1):14-28
Recent work in decision neuroscience suggests that visual saliency can interact with reward-based choice, and the lateral intraparietal cortex (LIP) is implicated in this process. In this study, we recorded from LIP neurons while monkeys performed a two-alternative choice task in which the reward and luminance associated with each offer were varied independently. We discovered that the animal's choice was dictated by the reward amount, while the luminance had a marginal effect. In the LIP, neuronal activity corresponded well with the animal's choice pattern, in that a majority of reward-modulated neurons encoded the reward amount in the neuron's preferred hemifield with a positive slope. In contrast, compared to their responses to low luminance, approximately equal proportions of luminance-sensitive neurons responded to high luminance with increased or decreased activity, leading to a much weaker population-level response. Meanwhile, in the non-preferred hemifield, the strength of encoding for reward amount and luminance was positively correlated, suggesting the integration of these two factors in the LIP. Moreover, neurons encoding reward and luminance were homogeneously distributed along the anterior-posterior axis of the LIP. Overall, our study provides further evidence supporting the neural instantiation of a priority map in the LIP in reward-based decisions.
Animals; Macaca mulatta/physiology*; Parietal Lobe; Neurons/physiology*; Saccades; Reward; Photic Stimulation
9.Form Properties of Moving Targets Bias Smooth Pursuit Target Selection in Monkeys.
Huixi DOU ; Huan WANG ; Sainan LIU ; Jun HUANG ; Zuxiang LIU ; Tiangang ZHOU ; Yan YANG
Neuroscience Bulletin 2023;39(8):1246-1262
During natural viewing, we often recognize multiple objects, detect their motion, and select one object as the target to track. It remains to be determined how such behavior is guided by the integration of visual form and motion perception. To address this, we studied how monkeys chose to track moving targets with different forms using smooth pursuit eye movements in a two-target task. We found that pursuit responses were biased toward the motion direction of a target with a hole. By computing the relative weighting, we found that the target with a hole carried a larger weight in the vector computation, and this global hole feature dominated other form properties. The dominance failed to account for changes in pursuit responses to targets with different forms moving singly. These findings suggest that the integration of visual form and motion perception can reshape the competition in sensorimotor networks to guide behavioral selection.
Animals; Pursuit, Smooth; Macaca mulatta; Motion Perception/physiology*; Photic Stimulation
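The relative weighting mentioned in the abstract above refers to a vector-averaging account of two-target pursuit, in which the initial pursuit velocity is modeled as a weighted sum of the two target velocities. The authors' fitting procedure is not detailed here, so the sketch below shows one standard least-squares way to estimate such a weight from made-up trial data.

```python
import numpy as np

def estimate_weight(pursuit_vel, vel_a, vel_b):
    """Fit w in: pursuit ~ w * v_A + (1 - w) * v_B (per-trial 2-D velocity vectors).
    Rearranged as (pursuit - v_B) ~ w * (v_A - v_B) and solved by least squares."""
    y = (pursuit_vel - vel_b).ravel()
    x = (vel_a - vel_b).ravel()
    return float(np.dot(x, y) / np.dot(x, x))

rng = np.random.default_rng(1)
n_trials = 200
vel_a = np.tile([12.0, 0.0], (n_trials, 1))   # target A moves rightward at 12 deg/s
vel_b = np.tile([0.0, 12.0], (n_trials, 1))   # target B moves upward at 12 deg/s
true_w = 0.65                                 # hypothetical bias toward target A (e.g., the target with a hole)
pursuit = true_w * vel_a + (1 - true_w) * vel_b + rng.normal(0, 1.0, (n_trials, 2))

print(f"estimated weight for target A: {estimate_weight(pursuit, vel_a, vel_b):.2f}")
```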
10.Research on eye movement data classification using support vector machine with improved whale optimization algorithm.
Yinhong SHEN ; Chang ZHANG ; Lin YANG ; Yuanyuan LI ; Xiujuan ZHENG
Journal of Biomedical Engineering 2023;40(2):335-342
When classifying eye movement patterns across different tasks, the performance of support vector machines is strongly affected by their parameters. To address this problem, we propose an algorithm based on an improved whale optimization algorithm to optimize the support vector machine and enhance the performance of eye movement data classification. Based on the characteristics of eye movement data, this study first extracts 57 features related to fixations and saccades and then uses the ReliefF algorithm for feature selection. To address the whale optimization algorithm's low convergence accuracy and its tendency to fall into local minima, we introduce inertia weights to balance local and global search and accelerate convergence, and we adopt a differential mutation strategy to increase individual diversity and escape local optima. Experiments on eight test functions show that the improved whale algorithm achieves the best convergence accuracy and convergence speed. Finally, the support vector machine model optimized by the improved whale algorithm is applied to the classification of eye movement data in autism, and experimental results on a public dataset show that the classification accuracy is greatly improved compared with the traditional support vector machine method. Compared with the standard whale algorithm and other optimization algorithms, the optimized model proposed in this paper achieves higher recognition accuracy and provides a new approach to eye movement pattern recognition. In the future, the method could be combined with eye trackers to acquire eye movement data and assist in medical diagnosis.
Animals; Support Vector Machine; Whales; Eye Movements; Algorithms
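As a rough illustration of the optimization described in the abstract above, the sketch below wraps a simplified whale optimization loop, with a decaying inertia-style weight and an occasional differential mutation step loosely following the stated improvements, around cross-validated SVM accuracy to tune C and gamma. It is not the authors' implementation; the dataset, search ranges, population size, and mutation scheme are all illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in data; in the paper this would be the 57 fixation/saccade features after ReliefF selection.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
low, high = np.array([-3.0, -5.0]), np.array([3.0, 1.0])   # search over log10(C) and log10(gamma)

def fitness(pos):
    C, gamma = 10.0 ** pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_whales, n_iter, F = 10, 20, 0.5
pop = rng.uniform(low, high, size=(n_whales, 2))
scores = np.array([fitness(p) for p in pop])
best, best_score = pop[scores.argmax()].copy(), scores.max()

for t in range(n_iter):
    a = 2.0 * (1 - t / n_iter)             # standard WOA control parameter, shrinking over time
    w = 0.9 - 0.5 * t / n_iter             # decaying inertia-style weight (one stated improvement)
    for i in range(n_whales):
        r1, r2 = rng.random(2)
        A, Cc = 2 * a * r1 - a, 2 * r2
        if rng.random() < 0.5:             # encircle the best whale, or explore a random one
            ref = best if abs(A) < 1 else pop[rng.integers(n_whales)]
            new = w * ref - A * np.abs(Cc * ref - pop[i])
        else:                              # spiral update toward the best whale
            l = rng.uniform(-1, 1)
            new = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + w * best
        if rng.random() < 0.2:             # differential mutation to keep diversity
            j, k = rng.choice(n_whales, size=2, replace=False)
            new = new + F * (pop[j] - pop[k])
        new = np.clip(new, low, high)
        s = fitness(new)
        if s > scores[i]:
            pop[i], scores[i] = new, s
            if s > best_score:
                best, best_score = new.copy(), s

print(f"best log10(C), log10(gamma) = {best}, CV accuracy = {best_score:.3f}")
```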
