1. Research advances in abnormal eye movements in multiple system atrophy
Journal of Apoplexy and Nervous Diseases 2025;42(1):30-33
Multiple system atrophy (MSA) is a rare neurodegenerative disease with diverse and atypical clinical manifestations, which overlap with those of other α-synuclein spectrum disorders. Its diagnosis and early differential diagnosis remain highly challenging, and missed diagnosis and misdiagnosis occur from time to time, delaying treatment. Videonystagmography (VNG) is currently the main noninvasive test used to assess vestibular function and can provide a range of eye movement parameters. Studies have shown the presence of abnormal eye movements in patients with MSA. From the perspective of vision and eye movement, this article reviews the current status of research on eye movements in patients with MSA and the underlying connections between them, in order to provide a reference for the early diagnosis of MSA.
Saccades
2. Clinical profile and outcomes of central microbial keratitis in the Philippines
Ma. Dominga B. Padilla ; Ruben Lim Bon Siong ; George Michael N. Sosuan
Philippine Journal of Ophthalmology 2025;50(1):26-32
OBJECTIVE
Despite being a preventable and treatable condition, central microbial keratitis (CMK) and its complications remain a significant cause of vision loss in the country. This study presents the demographic profile, risk factors, etiologies, treatments, and outcomes of CMK in the Philippines.
METHODS
The study was a two-center, prospective, non-randomized clinical study involving patients of the External Disease and Cornea Clinics of two tertiary eye referral centers in the Philippines. It was conducted as the Philippine leg of the Asia Cornea Society Infectious Keratitis Study (ACSIKS). Patients with a clinical diagnosis of CMK rendered by a cornea specialist, and who signed the consent form, were recruited into the study. They underwent uniform sample collection and culture techniques as described in the ACSIKS. All patients were followed up for 6 months. Data collected included demographics, risk factors, culture results, management, and treatment outcomes. Descriptive statistics and frequencies were used to analyze the data.
RESULTS
A total of 348 patients diagnosed with CMK were included. Trauma (65.5%) among the middle-aged (42.9 ± 17.9 years) male population was the most significant risk factor for development of CMK, followed by contact lens wear (12.9%), prior ocular surgery (6.0%), and ocular surface diseases (3.4%). Bacterial keratitis (53.2%) was the most common etiology of CMK, followed by fungal keratitis (27.0%), Acanthamoeba keratitis (5.7%), and viral keratitis (2.0%). Aspergillus species (18.3%) were the most common microbial isolates, and Pseudomonas species (13.9%) were the most common bacterial isolates. The median time from onset of symptoms to consultation at the study centers was 2 weeks. Medical treatment alone was sufficient to control the infection in 34.8% of cases. Surgical intervention was required in 22.6%, with evisceration/enucleation performed in 1 out of 3 patients who underwent surgery.
CONCLUSION
Bacterial infection remains the most common cause of CMK in the Philippines, followed by fungal infection. Significant risk factors include trauma and contact lens wear. Aspergillus species and Pseudomonas species were the most common fungal and bacterial isolates, respectively. Despite medical treatment, almost a quarter of the cases still required surgical intervention.
Human ; Fungi ; Bacteria ; Philippines ; Vision, Ocular ; Keratitis
3. Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals.
Yangmeng ZOU ; Lilin JIE ; Mingxun WANG ; Yong LIU ; Junhua LI
Journal of Biomedical Engineering 2025;42(1):32-41
Existing emotion recognition research is typically limited to static laboratory settings and does not fully handle changes in emotional states in dynamic scenarios. To address this problem, this paper proposes a method for dynamic continuous emotion recognition based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects to fill the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for the stimulus videos were given every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency-band features were extracted from the preprocessed EEG and eye movement data. A cascade feature fusion approach was used to combine the EEG and eye movement features into an information-rich multimodal feature vector. This feature vector was input into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. The approach can accurately recognize emotion transitions in dynamic situations, offering higher accuracy and robustness than using either EEG or eye movement signals alone, making it well suited for practical applications.
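The pipeline in this abstract, cascade fusion of EEG and eye movement features followed by regression, can be sketched as below. The feature dimensions and random data are placeholders for illustration, not the paper's actual features:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical stand-ins for preprocessed frequency-band features
eeg_feats = rng.normal(size=(100, 32))     # 100 five-second windows x 32 EEG features
eye_feats = rng.normal(size=(100, 8))      # matching eye-movement features
valence = rng.uniform(1.0, 9.0, size=100)  # continuous ratings on the 1-9 scale

# Cascade fusion: concatenate the two modalities' feature vectors
fused = np.concatenate([eeg_feats, eye_feats], axis=1)

# RBF-kernel support vector regression, one of the four models compared
model = SVR(kernel="rbf").fit(fused[:80], valence[:80])
pred = model.predict(fused[80:])
mse = float(np.mean((pred - valence[80:]) ** 2))
```

Decision tree, random forest, and K-nearest neighbors regressors would slot into the same fused-feature pipeline in place of the SVR.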
Humans ; Electroencephalography/methods* ; Emotions/physiology* ; Eye Movements/physiology* ; Signal Processing, Computer-Assisted ; Support Vector Machine ; Algorithms
4. A portable steady-state visual evoked potential brain-computer interface system for smart healthcare.
Yisen ZHU ; Zhouyu JI ; Shuran LI ; Haicheng WANG ; Yunfa FU ; Hongtao WANG
Journal of Biomedical Engineering 2025;42(3):455-463
This paper presents a portable brain-computer interface (BCI) system tailored for smart healthcare. By decoding the steady-state visual evoked potential (SSVEP), the system can rapidly and accurately identify the intentions of subjects, meeting the practical demands of daily medical scenarios. Firstly, an SSVEP stimulation interface and electroencephalogram (EEG) signal acquisition software were designed, enabling the system to execute multi-target and multi-task operations while also incorporating data visualization functionality. Secondly, the EEG signals recorded from the occipital region were decomposed into eight sub-frequency bands using filter bank canonical correlation analysis (FBCCA). Subsequently, the similarity between each sub-band signal and the reference signals was computed to achieve efficient SSVEP decoding. Finally, 15 subjects were recruited to participate in an online evaluation of the system. The experimental results indicated that in real-world scenarios, the system achieved an average accuracy of 85.19% in identifying the intentions of the subjects, and an information transfer rate (ITR) of 37.52 bit/min. The system was awarded third prize in the Visual BCI Innovation Application Development competition at the 2024 World Robot Contest, validating its effectiveness. In conclusion, this study developed a portable, multifunctional SSVEP online decoding system, providing an effective approach for human-computer interaction in smart healthcare.
Brain-Computer Interfaces ; Humans ; Evoked Potentials, Visual/physiology* ; Electroencephalography ; Signal Processing, Computer-Assisted ; Software ; Adult ; Male
5. Performance evaluation of a wearable steady-state visual evoked potential based brain-computer interface in real-life scenario.
Xiaodong LI ; Xiang CAO ; Junlin WANG ; Weijie ZHU ; Yong HUANG ; Feng WAN ; Yong HU
Journal of Biomedical Engineering 2025;42(3):464-472
Brain-computer interfaces (BCIs) have high application value in the field of healthcare. However, in practical clinical applications, both convenience and system performance should be considered. Wearable BCIs are generally very convenient, but their performance in real-life scenarios needs to be evaluated. This study proposed a wearable steady-state visual evoked potential (SSVEP)-based BCI system equipped with a small-sized electroencephalogram (EEG) collector and a high-performance training-free decoding algorithm. Ten healthy subjects participated in a test of the BCI system under simplified experimental preparation. The results showed that the average classification accuracy of this BCI was 94.10% for 40 targets, with no significant difference from a dataset collected under laboratory conditions. The system achieved a maximum information transfer rate (ITR) of 115.25 bit/min with 8-channel signals and 98.49 bit/min with 4-channel signals, indicating that the 4-channel solution can serve as an option for few-channel BCIs. Overall, this wearable SSVEP-BCI achieved good performance in real-life scenarios, which helps to promote BCI technology in clinical practice.
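ITR figures like those quoted above are conventionally computed with the Wolpaw formula, which combines the number of targets, the classification accuracy, and the time per selection. A minimal implementation (the trial length is a free parameter here, since the abstract does not state it):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_secs):
    """Wolpaw information transfer rate, scaled from bits/selection to bits/minute."""
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance level, no information is transferred
    if accuracy >= 1.0:
        bits = math.log2(n_targets)
    else:
        bits = (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1)))
    return bits * 60.0 / trial_secs
```

With 40 targets, perfect accuracy yields log2(40) ≈ 5.32 bits per selection; lower accuracy or longer selection times reduce the ITR accordingly.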
Brain-Computer Interfaces ; Humans ; Evoked Potentials, Visual/physiology* ; Electroencephalography ; Wearable Electronic Devices ; Algorithms ; Signal Processing, Computer-Assisted ; Adult ; Male
6. Technical maturity and bubble risks of brain-computer interface (BCI): Considerations from research to industrial translation.
Journal of Biomedical Engineering 2025;42(4):651-659
Brain-computer interface (BCI) technology faces structural risks due to a misalignment between its technological maturity and industrialization expectations. This study used the Technology Readiness Level (TRL) framework to assess the status of major BCI paradigms, such as steady-state visual evoked potential (SSVEP), motor imagery, and P300, and found that they predominantly remained at TRL4 to TRL6, with few stable applications reaching TRL9. The analysis identified four interrelated sources of bubble risk: overly broad definitions of BCI, excessive focus on decoding performance, asynchronous translational progress, and imprecise terminology usage. These distortions have contributed to the misallocation of research resources and to public misunderstanding. To foster the sustainable development of BCI, this paper advocated the establishment of a standardized TRL evaluation system, clearer terminological boundaries, stronger support for fundamental research, enhanced ethical oversight, and the implementation of inclusive and diversified governance mechanisms.
Brain-Computer Interfaces ; Humans ; Evoked Potentials, Visual ; Electroencephalography ; Event-Related Potentials, P300
7. Research on hybrid brain-computer interface based on imperceptible visual and auditory stimulation responses.
Zexin PANG ; Yijun WANG ; Qingpeng DONG ; Zijian CHENG ; Zhaohui LI ; Ruoqing ZHANG ; Hongyan CUI ; Xiaogang CHEN
Journal of Biomedical Engineering 2025;42(4):660-667
In recent years, hybrid brain-computer interfaces (BCIs) have gained significant attention due to their demonstrated advantages in increasing the number of targets and enhancing the robustness of such systems. However, existing studies usually construct BCI systems using intense auditory stimulation and strong central visual stimulation, which leads to poor user experience and indicates a need to improve system comfort. Studies have shown that peripheral visual stimulation and lower-intensity auditory stimulation can effectively boost user comfort. Therefore, this study used high-frequency peripheral visual stimulation and 40-dB weak auditory stimulation to elicit steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) signals, building a high-comfort hybrid BCI based on weak audio-visual evoked responses. The system coded 40 targets via 20 high-frequency visual stimulation frequencies and two auditory stimulation frequencies, improving the coding efficiency of BCI systems. Results showed that the hybrid system's average classification accuracy was (78.00 ± 12.18)%, and the information transfer rate (ITR) reached 27.47 bits/min. This study offers new ideas for the design of hybrid BCI paradigms based on imperceptible stimulation.
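The 40-target code space described here, 20 visual frequencies crossed with 2 auditory frequencies, can be illustrated with a simple codebook; the specific frequency values below are hypothetical, as the abstract does not list them:

```python
from itertools import product

# Assumed frequency sets; the paper specifies 20 high-frequency visual
# stimulation frequencies and two auditory (ASSR) frequencies.
visual_freqs = [30.0 + 0.5 * k for k in range(20)]  # hypothetical 30-39.5 Hz grid
auditory_freqs = [37.0, 43.0]                       # hypothetical ASSR rates

# Each target is a unique (visual, auditory) pair: 20 x 2 = 40 codes
codebook = list(product(visual_freqs, auditory_freqs))

def decode(v_hat, a_hat):
    """Map decoded SSVEP and ASSR frequencies back to a target index."""
    return codebook.index((v_hat, a_hat))
```

Crossing the two modalities is what lifts the target count to 40 while keeping only 22 distinct stimulation frequencies, which is the coding-efficiency gain the abstract refers to.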
Brain-Computer Interfaces ; Humans ; Evoked Potentials, Visual/physiology* ; Acoustic Stimulation ; Photic Stimulation ; Electroencephalography ; Evoked Potentials, Auditory/physiology* ; Adult
8. A method for emotion transition recognition using cross-modal feature fusion and global perception.
Lilin JIE ; Yangmeng ZOU ; Zhengxiu LI ; Baoliang LYU ; Weilong ZHENG ; Ming LI
Journal of Biomedical Engineering 2025;42(5):977-986
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants, each annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from EEG and eye movement signals, resulting in multimodal feature vectors enriched with highly discriminative emotional information. These vectors are then input into a parallel hybrid architecture that combines convolutional neural networks (CNNs) and Transformers. The CNN is employed to capture local time-series features, whereas the Transformer leverages its robust global perception capabilities to effectively model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and a classic multimodal emotion dataset. It exhibits superior recognition accuracy and stability when compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in the field of biomedical engineering.
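The cross-modal attention step described here can be sketched in plain NumPy: each modality forms queries that attend over the other modality's features. The feature dimensions and random inputs are placeholders, and this omits the DCCA and CNN-Transformer stages of the full CFGPN model:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product attention where one modality attends to the other."""
    d = queries.shape[1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d), axis=-1)
    return attn @ keys_values

rng = np.random.default_rng(1)
eeg = rng.normal(size=(50, 16))  # 50 aligned windows of EEG features (hypothetical)
eye = rng.normal(size=(50, 16))  # matching eye-movement features

# Symmetric cross-modal fusion: each modality attends over the other,
# and the two attended representations are concatenated
fused = np.concatenate([cross_attention(eeg, eye), cross_attention(eye, eeg)], axis=1)
```

In the full method, the fused vectors would then feed the parallel CNN (local temporal patterns) and Transformer (long-range dependencies) branches.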
Humans ; Emotions/physiology* ; Electroencephalography ; Neural Networks, Computer ; Eye Movements ; Perception
9. The Role of Prefrontal and Posterior Parietal Cortex in Generating Multiple Step Saccades.
Wenbo MA ; Zhaohuan DING ; Leixiao FENG ; Xiaoli LI ; Mingsha ZHANG
Neuroscience Bulletin 2025;41(8):1418-1428
While multiple step saccades (MSS) are occasionally reported in the healthy population, they are more evident in patients with Parkinson's disease (PD). Therefore, MSS has been suggested as a biological marker for the diagnosis of PD. However, the lack of clarity on the neural mechanism underlying the generation of MSS largely impedes their application in the clinic. We have proposed recently that MSS are triggered by the discrepancy between desired and executed saccades. Accordingly, brain regions involved in saccadic planning and execution might play a role in the generation of MSS. To test this hypothesis, we explored the role of the prefrontal (PFC) and posterior parietal cortex (PPC) in generating MSS by conducting two experiments: electroencephalographic recording and single-pulse transcranial magnetic stimulation in the PFC or PPC of humans while participants were performing a gap saccade task. We found that the PFC and PPC are involved in the generation of MSS.
Humans ; Parietal Lobe/physiology* ; Saccades/physiology* ; Prefrontal Cortex/physiology* ; Male ; Transcranial Magnetic Stimulation ; Female ; Electroencephalography ; Adult ; Young Adult
10. Transcranial temporal interference stimulation precisely targets deep brain regions to regulate eye movements.
Mo WANG ; Sixian SONG ; Dan LI ; Guangchao ZHAO ; Yu LUO ; Yi TIAN ; Jiajia ZHANG ; Quanying LIU ; Pengfei WEI
Neuroscience Bulletin 2025;41(8):1390-1402
Transcranial temporal interference stimulation (tTIS) is a novel non-invasive neuromodulation technique with the potential to precisely target deep brain structures. This study explores the neural and behavioral effects of tTIS on the superior colliculus (SC), a region involved in eye movement control, in mice. Computational modeling revealed that tTIS delivers more focused stimulation to the SC than traditional transcranial alternating current stimulation. In vivo experiments, including Ca2+ signal recordings and eye movement tracking, showed that tTIS effectively modulates SC neural activity and induces eye movements. A significant correlation was found between stimulation frequency and saccade frequency, suggesting direct tTIS-induced modulation of SC activity. These results demonstrate the precision of tTIS in targeting deep brain regions and regulating eye movements, highlighting its potential for neuroscientific research and therapeutic applications.
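The temporal interference principle underlying tTIS, in which two high-frequency carriers sum to a field whose amplitude is modulated at their difference frequency, can be demonstrated numerically. The carrier frequencies below are illustrative values, not the stimulation parameters used in the study:

```python
import numpy as np

fs = 10_000                      # sampling rate for the simulation, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f1, f2 = 2000.0, 2010.0          # hypothetical kHz-range carrier frequencies

# Each carrier alone oscillates too fast for neurons to follow; their
# superposition is amplitude-modulated at the difference frequency.
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Envelope of the summed fields: beats at f2 - f1 = 10 Hz
envelope = np.abs(2.0 * np.cos(np.pi * (f2 - f1) * t))
```

Only where the two fields overlap does the low-frequency envelope arise, which is why the interference region, rather than the electrode sites, determines where neural activity is modulated.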
Animals ; Superior Colliculi/physiology* ; Transcranial Direct Current Stimulation/methods* ; Eye Movements/physiology* ; Male ; Mice ; Mice, Inbred C57BL

