1. Research on emotion recognition methods based on multi-modal physiological signal feature fusion.
Zhiwen ZHANG ; Naigong YU ; Yan BIAN ; Jinhan YAN
Journal of Biomedical Engineering 2025;42(1):17-23
Emotion classification and recognition is a crucial area in emotional computing. Physiological signals, such as electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied to integrate the signals, and both support vector machine (SVM) and extreme learning machine (ELM) were used for classification. The results showed that the classification accuracy was highest when the fusion weights were set to EEG 0.7, EMG 0.15, and EDA 0.15, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented improvements of 5.81% and 2.95% compared to using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.
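The abstract specifies only the fusion weights (EEG 0.7, EMG 0.15, EDA 0.15) and the classifiers used; below is a minimal sketch of feature-weighted fusion with an SVM on synthetic stand-in features. The array names, dimensions, and preprocessing are assumptions, and the ELM classifier is omitted because it has no standard scikit-learn implementation.

```python
# Minimal sketch of feature-weighted fusion + SVM classification.
# Feature matrices are synthetic stand-ins (rows = trials), not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300                                # trials
eeg_feat = rng.normal(size=(n, 64))    # EEG features (dimension assumed)
emg_feat = rng.normal(size=(n, 16))    # EMG features (dimension assumed)
eda_feat = rng.normal(size=(n, 8))     # EDA features (dimension assumed)
labels = rng.integers(0, 3, size=n)    # happiness / sadness / fear

def scale(x):
    # Standardize each modality before weighting so the weights are comparable.
    return StandardScaler().fit_transform(x)

# Weight and concatenate (weights from the paper: EEG 0.7, EMG 0.15, EDA 0.15).
fused = np.hstack([0.70 * scale(eeg_feat),
                   0.15 * scale(emg_feat),
                   0.15 * scale(eda_feat)])

x_tr, x_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2,
                                          random_state=0, stratify=labels)
clf = SVC(kernel="rbf").fit(x_tr, y_tr)
print("SVM accuracy:", clf.score(x_te, y_te))
```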
Humans; Emotions/physiology*; Electroencephalography; Support Vector Machine; Electromyography; Signal Processing, Computer-Assisted; Galvanic Skin Response/physiology*; Machine Learning; Male
2. Audiovisual emotion recognition based on a multi-head cross attention mechanism.
Ziqiong WANG ; Dechun ZHAO ; Lu QIN ; Yi CHEN ; Yuchen SHEN
Journal of Biomedical Engineering 2025;42(1):24-31
In audiovisual emotion recognition, representation learning is a research direction receiving considerable attention, and the key lies in constructing effective affective representations with both consistency and variability. However, accurately realizing such affective representations remains challenging. To this end, this paper proposed a cross-modal audiovisual recognition model based on a multi-head cross-attention mechanism. The model achieved feature fusion and modality alignment through a multi-head cross-attention architecture, and adopted a segmented training strategy to cope with the missing-modality problem. In addition, a unimodal auxiliary loss task with shared parameters was designed to preserve the independent information of each modality. Ultimately, the model achieved macro and micro F1 scores of 84.5% and 88.2%, respectively, on the crowdsourced annotated multimodal emotion dataset of actor performances (CREMA-D). The proposed model effectively captures intra- and inter-modal feature representations of the audio and video modalities, and unifies the unimodal and multimodal emotion recognition frameworks, providing a new solution for audiovisual emotion recognition.
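As an illustration of the core mechanism only, here is a minimal PyTorch sketch of audio-to-video multi-head cross-attention. The dimensions, residual connection, and layer normalization are assumptions; the paper's segmented training strategy and unimodal auxiliary losses are not shown.

```python
# Minimal sketch of audio-to-video multi-head cross-attention (PyTorch).
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio, video):
        # Queries come from audio; keys/values from video, so each audio
        # frame aggregates the video frames most relevant to it.
        fused, _ = self.attn(query=audio, key=video, value=video)
        return self.norm(audio + fused)  # residual connection (assumed)

audio = torch.randn(8, 50, 256)   # (batch, audio frames, feature dim)
video = torch.randn(8, 30, 256)   # (batch, video frames, feature dim)
print(CrossModalAttention()(audio, video).shape)  # torch.Size([8, 50, 256])
```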
Emotions; Humans; Attention; Algorithms
3. Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals.
Yangmeng ZOU ; Lilin JIE ; Mingxun WANG ; Yong LIU ; Junhua LI
Journal of Biomedical Engineering 2025;42(1):32-41
Existing emotion recognition research is typically limited to static laboratory settings and does not fully address changes in emotional states in dynamic scenarios. To address this problem, this paper proposes a method for dynamic continuous emotion recognition based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects, filling a gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for the stimulus videos were collected every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency-band features were extracted from the preprocessed EEG and eye movement data. A cascade feature fusion approach was used to effectively combine the EEG and eye movement features, generating an information-rich multimodal feature vector. This feature vector was input into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. The approach can accurately recognize emotion transitions in dynamic situations, offering higher accuracy and robustness than either EEG or eye movement signals alone, making it well suited for practical applications.
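A minimal sketch of the cascade (concatenation) fusion step followed by RBF-kernel support vector regression, one of the four regression models named above; the feature dimensions and synthetic data are assumptions.

```python
# Minimal sketch: cascade (concatenation) fusion of EEG and eye movement
# features, then RBF-kernel SVR for continuous valence prediction.
# Arrays are synthetic stand-ins; in the study, labels came from 1-9
# ratings collected every five seconds and then normalized.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
eeg_feat = rng.normal(size=(n, 40))   # frequency-band features (assumed dim)
eye_feat = rng.normal(size=(n, 12))   # eye movement features (assumed dim)
valence = rng.uniform(0, 1, size=n)   # normalized continuous labels

fused = np.hstack([eeg_feat, eye_feat])  # cascade (concatenation) fusion
x_tr, x_te, y_tr, y_te = train_test_split(fused, valence, test_size=0.2,
                                          random_state=0)
model = SVR(kernel="rbf").fit(x_tr, y_tr)
print("valence MSE:", mean_squared_error(y_te, model.predict(x_te)))
```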
Humans; Electroencephalography/methods*; Emotions/physiology*; Eye Movements/physiology*; Signal Processing, Computer-Assisted; Support Vector Machine; Algorithms
4. A study on electroencephalogram characteristics of depression in patients with aphasia based on resting state and emotional Stroop task.
Siyuan DING ; Yan ZHU ; Chang SHI ; Banghua YANG
Journal of Biomedical Engineering 2025;42(3):488-495
Post-stroke aphasia is associated with a significantly elevated risk of depression, yet the underlying mechanisms remain unclear. This study recorded 64-channel electroencephalogram data and depression scale scores from 12 aphasic patients with depression, 8 aphasic patients without depression, and 12 healthy controls during resting state and an emotional Stroop task. Spectral and microstate analyses were conducted to examine brain activity patterns across conditions. Results showed that depression scores significantly negatively predicted the occurrence of microstate class C and positively predicted the transition probability from microstate class A to class B. Furthermore, aphasic patients with depression exhibited increased alpha-band activation in the frontal region. These findings suggest distinct neural features in aphasic patients with depression and offer new insights into the mechanisms contributing to their heightened vulnerability to depression.
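One ingredient of the spectral analysis, relative alpha-band power, can be sketched as follows; the sampling rate, band edges (8-13 Hz), and synthetic signal are assumptions, not taken from the study.

```python
# Minimal sketch: relative alpha-band (8-13 Hz) power of a frontal EEG
# channel via Welch's PSD. The signal here is synthetic; the sampling
# rate and band edges are common defaults, not the paper's settings.
import numpy as np
from scipy.signal import welch

fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic 10 Hz "alpha" oscillation plus noise as a stand-in signal.
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.5 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = np.trapz(psd[alpha], freqs[alpha])  # integrate PSD over band
total_power = np.trapz(psd, freqs)
print("relative alpha power:", alpha_power / total_power)
```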
Humans; Electroencephalography; Aphasia/etiology*; Stroop Test; Emotions/physiology*; Depression/etiology*; Male; Female; Middle Aged; Stroke/complications*; Brain/physiopathology*; Aged; Adult; Rest/physiology*
5. Research on bimodal emotion recognition algorithm based on multi-branch bidirectional multi-scale time perception.
Peiyun XUE ; Sibin WANG ; Jing BAI ; Yan QIANG
Journal of Biomedical Engineering 2025;42(3):528-536
Emotion reflects human psychological and physiological health, and human emotion is expressed mainly through voice and facial expression. How to extract and effectively integrate these two modes of emotion information is one of the main challenges in emotion recognition. This paper proposes a multi-branch bidirectional multi-scale time perception model, which processes the speech Mel-frequency cepstral coefficients in both forward and reverse directions along the time dimension. The model uses causal convolution to obtain temporal correlation information between features at different scales, and assigns attention maps to them according to this information, so as to obtain a multi-scale fusion of speech emotion features. Secondly, this paper proposes a bimodal dynamic feature fusion algorithm, which draws on the advantages of AlexNet and uses overlapping max-pooling layers to obtain richer fusion features from the concatenated feature matrices of the different modalities. Experimental results show that the accuracy of the proposed multi-branch bidirectional multi-scale time perception bimodal emotion recognition model reaches 97.67% and 90.14%, respectively, on two public audiovisual emotion datasets, outperforming other common methods and indicating that the proposed model can effectively capture emotion feature information and improve the accuracy of emotion recognition.
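A minimal PyTorch sketch of one bidirectional multi-scale branch, applying causal 1-D convolutions at several dilation rates to an MFCC sequence in forward and time-reversed order; channel counts and dilation rates are assumptions, and the paper's attention maps and AlexNet-style fusion head are omitted.

```python
# Minimal sketch of a bidirectional multi-scale causal-convolution branch
# over an MFCC sequence (PyTorch). Sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, ch, kernel=3, dilation=1):
        super().__init__()
        self.pad = (kernel - 1) * dilation  # left-pad only -> causal
        self.conv = nn.Conv1d(ch, ch, kernel, dilation=dilation)

    def forward(self, x):                   # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class BiMultiScale(nn.Module):
    def __init__(self, ch=40, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(CausalConv1d(ch, dilation=d)
                                      for d in dilations)

    def forward(self, mfcc):
        fwd = [b(mfcc) for b in self.branches]           # forward in time
        rev = [b(mfcc.flip(-1)).flip(-1)                 # reversed in time
               for b in self.branches]
        return torch.cat(fwd + rev, dim=1)               # stack all scales

mfcc = torch.randn(8, 40, 200)     # (batch, MFCC dims, frames)
print(BiMultiScale()(mfcc).shape)  # torch.Size([8, 240, 200])
```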
Humans; Emotions; Algorithms; Facial Expression; Time Perception; Neural Networks, Computer; Speech
6. A method for emotion transition recognition using cross-modal feature fusion and global perception.
Lilin JIE ; Yangmeng ZOU ; Zhengxiu LI ; Baoliang LYU ; Weilong ZHENG ; Ming LI
Journal of Biomedical Engineering 2025;42(5):977-986
Current studies on electroencephalogram (EEG) emotion recognition concentrate primarily on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data annotated with dynamic continuous emotion labels were simultaneously collected from 20 participants. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then input into a parallel hybrid architecture combining convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, whereas the Transformer leverages its robust global perception capability to model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.
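A minimal PyTorch sketch of the parallel CNN-Transformer hybrid over an already-fused multimodal feature sequence; all sizes are assumptions, and the DCCA and cross-modal attention stages that produce the input are not shown.

```python
# Minimal sketch of a parallel CNN + Transformer hybrid (PyTorch) that
# maps a fused multimodal feature sequence to valence/arousal values.
import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # CNN branch: local temporal patterns.
        self.cnn = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.ReLU())
        # Transformer branch: long-range temporal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * dim, 2)  # outputs: valence, arousal

    def forward(self, x):                  # x: (batch, time, dim)
        local = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        global_ = self.transformer(x)
        # Pool each branch over time, then concatenate and regress.
        fused = torch.cat([local.mean(1), global_.mean(1)], dim=-1)
        return self.head(fused)

seq = torch.randn(8, 100, 128)  # fused multimodal feature sequence
print(ParallelCNNTransformer()(seq).shape)  # torch.Size([8, 2])
```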
Humans; Emotions/physiology*; Electroencephalography; Neural Networks, Computer; Eye Movements; Perception
7. Chain mediating role of family care and emotional management between social support and anxiety in primary school students.
Zhan-Wen LI ; Jian-Hui WEI ; Ke-Bin CHEN ; Xiao-Rui RUAN ; Yu-Ting WEN ; Cheng-Lu ZHOU ; Jia-Peng TANG ; Ting-Ting WANG ; Ya-Qing TAN ; Jia-Bi QIN
Chinese Journal of Contemporary Pediatrics 2025;27(10):1176-1184
OBJECTIVES:
To investigate the chain mediating role of family care and emotional management in the relationship between social support and anxiety among rural primary school students.
METHODS:
A questionnaire survey was conducted among students in grades 4 to 6 from four counties in Hunan Province. Data were collected using the Social Support Rating Scale, the Family Care Index Scale, the Emotional Intelligence Scale, and the Generalized Anxiety Disorder-7 (GAD-7) scale. Logistic regression analysis was used to explore the influencing factors of anxiety symptoms. Mediation analysis was conducted to assess the chain mediating effects of family care and emotional management between social support and anxiety.
RESULTS:
A total of 4 141 questionnaires were distributed, with 3 874 valid responses (effective response rate: 93.55%). The prevalence rate of anxiety symptoms among these students was 9.32% (95%CI: 8.40%-10.23%). Significant differences were observed in the prevalence rates of anxiety symptoms among groups with different levels of social support, family functioning, and emotional management ability (P<0.05). The total indirect effect of social support on anxiety symptoms via family care and emotional management was significant (β=-0.137, 95%CI: -0.167 to -0.109), and the direct effect of social support on anxiety symptoms remained significant (P<0.05). Family care and emotional management served as significant chain mediators in the relationship between social support and anxiety symptoms (β=-0.025, 95%CI: -0.032 to -0.018), accounting for 14.5% of the total effect.
CONCLUSIONS:
Social support can directly affect anxiety symptoms among rural primary school students and can also indirectly influence anxiety symptoms through the chain mediating effects of family care and emotional management. These findings provide scientific evidence for the prevention of anxiety in primary school students from multiple perspectives.
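A minimal sketch of estimating the chain mediation path (social support, then family care, then emotional management, then anxiety) via ordinary least squares on synthetic data; the study's logistic models and bootstrap confidence intervals are omitted, and all variable names and effect sizes are illustrative.

```python
# Minimal sketch of a chain mediation path estimate
# (support -> family care -> emotional management -> anxiety) with OLS.
# Data and coefficients are synthetic stand-ins, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
support = rng.normal(size=n)
family = 0.5 * support + rng.normal(size=n)                   # mediator 1
emotion = 0.4 * family + 0.2 * support + rng.normal(size=n)   # mediator 2
anxiety = -0.3 * emotion - 0.2 * support + rng.normal(size=n)

def coefs(y, *xs):
    # Return the slope coefficients of y regressed on xs (plus intercept).
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit().params[1:]

a1, = coefs(family, support)                     # X -> M1
_, d21 = coefs(emotion, support, family)         # M1 -> M2 (controlling X)
b2 = coefs(anxiety, support, family, emotion)[2]  # M2 -> Y (controlling X, M1)

print("chain indirect effect a1*d21*b2:", a1 * d21 * b2)
```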
Humans; Female; Male; Social Support; Anxiety/etiology*; Child; Students/psychology*; Emotions; Logistic Models
8. Effects of Focused Solution Model Nursing on quality of life and negative emotions of prostate cancer patients.
Lei YU ; Ting-Ling ZHANG ; Wen-Fang CHEN ; Xiu-Qin YE ; Jie LIU ; Qian MENG ; Ying-Chun HUANG ; Song XU
National Journal of Andrology 2025;31(8):723-727
OBJECTIVE:
To analyze the effects of the Focused Solution Model Nursing intervention on the quality of life and negative emotions of patients with prostate cancer.
METHODS:
A total of 82 prostate cancer patients diagnosed and treated at the General Hospital of Eastern Theater Command between September 2022 and September 2024 were included and randomly divided into a study group and a control group using a random number table, with 41 patients in each group. The patients in the study group received the Focused Solution Model Nursing intervention, while routine care was used in the control group. Quality of life and negative emotions were compared between the two groups using the World Health Organization Quality of Life-Brief (WHOQOL-BREF), Hamilton Anxiety Scale (HAMA), and Hamilton Depression Scale (HAMD).
RESULTS:
Compared to the control group, the patients in the study group exhibited significantly higher scores in the physiological, psychological, environmental, and social relationship domains of the WHOQOL-BREF scale (P<0.05). The HAMA and HAMD scores in the study group were lower than those in the control group (P<0.05). Additionally, the scores on all subscales of the Social Impact Scale, including social exclusion, internalized shame, social isolation, and economic discrimination, were significantly lower in the study group than in the control group (P<0.05).
CONCLUSION:
Focused Solution Model Nursing intervention can effectively improve the quality of life and alleviate the negative emotions of prostate cancer patients during clinical treatment.
Humans; Male; Quality of Life; Prostatic Neoplasms/nursing*; Emotions; Surveys and Questionnaires; Middle Aged
9. Characteristics of the amygdala and its subregions in premenstrual syndrome/premenstrual dysphoric disorder patients.
Ming CHENG ; Baoyi LI ; Zhen ZHANG ; Zhaoshu JIANG ; Jie YANG ; Peng JIANG ; Zhonghao YUAN
Journal of Central South University(Medical Sciences) 2025;50(3):492-500
Premenstrual dysphoric disorder (PMDD) is considered a severe form of premenstrual syndrome (PMS). As a key brain region involved in emotional regulation and stress responses, the amygdala has been implicated in the pathogenesis of PMS/PMDD. The amygdala is composed of multiple subregions, each playing distinct roles in emotion, memory, and stress responses and forming complex interconnections with other brain areas. Summarizing the interconnections among amygdala subregions and their connectivity with external areas, and exploring the neuroimaging characteristics of the amygdala as well as changes in its neural circuits and brain networks in these patients, will help provide a theoretical foundation for targeted modulation of amygdala function in the treatment of PMS/PMDD.
Humans; Amygdala/diagnostic imaging*; Female; Premenstrual Dysphoric Disorder/pathology*; Premenstrual Syndrome/pathology*; Emotions/physiology*; Magnetic Resonance Imaging
10. Construction of recognition models for subthreshold depression based on multiple machine learning algorithms and vocal emotional characteristics.
Meimei CHEN ; Yang WANG ; Huangwei LEI ; Fei ZHANG ; Ruina HUANG ; Zhaoyang YANG
Journal of Southern Medical University 2025;45(4):711-717
OBJECTIVES:
To construct voice-based classification models using six machine learning algorithms and the vocal emotional characteristics of individuals with subthreshold depression, in order to facilitate early identification of subthreshold depression.
METHODS:
We collected voice data from normal individuals and participants with subthreshold depression by asking them to read specifically chosen words and texts. From each voice sample, 384-dimensional vocal emotional feature variables were extracted, including energy, Mel-frequency cepstral coefficient, zero-crossing rate, voicing probability, fundamental frequency, and difference features. The recursive feature elimination (RFE) method was employed to select voice feature variables. Classification models were then built using the machine learning algorithms Adaptive Boosting (AdaBoost), Random Forest (RF), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Lasso Regression (LRLasso), and Support Vector Machine (SVM), and the performance of these models was evaluated. To assess the generalization capability of the models, we used real-world speech data to evaluate the best speech recognition classification models.
RESULTS:
The AdaBoost, RF, and LDA models achieved high prediction accuracies of 100%, 100%, and 93.3%, respectively, on the word-reading speech test set. On the text-reading speech test set, the accuracies of the AdaBoost, RF, and LDA models were 90%, 80%, and 90%, respectively, while the accuracies of the other 3 models were all below 80%. On real-world word-reading and text-reading speech data, the AdaBoost and Random Forest classification models still achieved high predictive accuracies (91.7% and 80.6% for AdaBoost, and 86.1% and 77.8% for Random Forest, respectively).
CONCLUSIONS:
Analyzing vocal emotional characteristics allows effective identification of individuals with subthreshold depression. The AdaBoost and RF models show excellent performance for classifying individuals with subthreshold depression, and may thus offer valuable assistance in clinical and research settings.
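A minimal sketch of the RFE feature-selection step followed by AdaBoost classification on synthetic 384-dimensional features; the base estimator used inside RFE and the number of selected features are assumptions, not taken from the study.

```python
# Minimal sketch: RFE feature selection over 384-dimensional vocal
# emotional features, then AdaBoost classification. Data are synthetic
# stand-ins; the RFE base estimator and feature count are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 384))        # 384-dim vocal feature vectors
y = rng.integers(0, 2, size=200)       # 0 = normal, 1 = subthreshold

x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
model = make_pipeline(
    # Recursively eliminate features down to 50 (count assumed).
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=50),
    AdaBoostClassifier(random_state=0),
)
model.fit(x_tr, y_tr)
print("accuracy:", model.score(x_te, y_te))
```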
Humans; Machine Learning; Emotions; Depression/diagnosis*; Algorithms; Voice; Support Vector Machine; Male; Female
