1.The joint analysis of heart health and mental health based on continual learning.
Hongxiang GAO ; Zhipeng CAI ; Jianqing LI ; Chengyu LIU
Journal of Biomedical Engineering 2025;42(1):1-8
Cardiovascular diseases and psychological disorders represent two major threats to human physical and mental health. Research on electrocardiogram (ECG) signals offers valuable opportunities to address these issues. However, existing methods are constrained by limitations in understanding ECG features and transferring knowledge across tasks. To address these challenges, this study developed a multi-resolution feature encoding network based on residual networks, which effectively extracted local morphological features and global rhythm features of ECG signals, thereby enhancing feature representation. Furthermore, a model compression-based continual learning method was proposed, enabling the structured transfer of knowledge from simpler tasks to more complex ones, resulting in improved performance in downstream tasks. The multi-resolution learning model demonstrated superior or comparable performance to state-of-the-art algorithms across five datasets, including tasks such as ECG QRS complex detection, arrhythmia classification, and emotion classification. The continual learning method achieved significant improvements over conventional training approaches in cross-domain, cross-task, and incremental data scenarios. These results highlight the potential of the proposed method for effective cross-task knowledge transfer in ECG analysis and offer a new perspective for multi-task learning using ECG signals.
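The abstract's multi-resolution encoder is a learned residual network, but the core idea (extracting QRS-scale local morphology and rhythm-scale global trends from the same ECG trace) can be sketched with fixed windows. A minimal illustration, in which the window lengths, statistics, and pooling are all stand-in assumptions rather than the paper's architecture:

```python
import numpy as np

def multi_resolution_features(ecg, fs=250):
    """Toy multi-resolution feature extractor (illustrative only).

    Local branch: short windows capture QRS-complex-scale morphology.
    Global branch: long windows capture rhythm-scale trends.
    The paper uses learned residual-network branches; fixed windows
    and simple statistics stand in for them here.
    """
    local_win = int(0.1 * fs)    # ~100 ms: QRS-complex scale
    global_win = int(2.0 * fs)   # ~2 s: rhythm scale
    feats = []
    for win in (local_win, global_win):
        n = len(ecg) // win
        segs = ecg[: n * win].reshape(n, win)  # non-overlapping windows
        feats.append(segs.mean(axis=1))
        feats.append(segs.std(axis=1))
    # Pool each branch to a fixed-length descriptor
    return np.array([f.mean() for f in feats])

ecg = np.sin(2 * np.pi * 1.2 * np.arange(2500) / 250)  # synthetic 10 s trace
fv = multi_resolution_features(ecg)
```

A real encoder would replace the window statistics with convolutional filters at the corresponding receptive-field sizes; the point of the sketch is only the two-resolution decomposition.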
Humans
;
Electrocardiography/methods*
;
Mental Health
;
Algorithms
;
Signal Processing, Computer-Assisted
;
Machine Learning
;
Arrhythmias, Cardiac/diagnosis*
;
Cardiovascular Diseases
;
Neural Networks, Computer
;
Mental Disorders
2.Research on motor imagery recognition based on feature fusion and transfer adaptive boosting.
Yuxin ZHANG ; Chenrui ZHANG ; Shihao SUN ; Guizhi XU
Journal of Biomedical Engineering 2025;42(1):9-16
This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the issue of low accuracy in cross-subject motor imagery (MI) recognition, thereby increasing the reliability of MI-based brain-computer interfaces (BCI) for cross-individual use. Time-frequency domain features of MI were obtained using an autoregressive model, power spectral density, and the discrete wavelet transform, while the filter bank common spatial pattern was used to extract spatial-domain features and multi-scale dispersion entropy was employed to extract nonlinear features. The IV-2a dataset from the 4th International BCI Competition was used for the binary classification task, with the pattern recognition model constructed by combining the improved TrAdaboost ensemble learning algorithm with support vector machine (SVM), k-nearest neighbor (KNN), and mind evolutionary algorithm-based back propagation (MEA-BP) neural network classifiers. The results show that the SVM-based TrAdaboost ensemble learning algorithm performed best when 30% of the target-domain instance data was migrated, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can be used to recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.
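The transfer step rests on TrAdaBoost's asymmetric reweighting: misclassified source-domain (other-subject) samples are trusted less, while misclassified target-domain (current-subject) samples are emphasized. A sketch of one weight update following the classic TrAdaBoost rule of Dai et al.; the abstract's improved variant and its classifier coupling are not reproduced here:

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, eps_t, n_iters):
    """One TrAdaBoost weight update (classic rule, illustrative).

    w_src / w_tgt     : current sample weights, source / target domain
    err_src / err_tgt : boolean arrays, True where the weak learner erred
    eps_t             : weighted error rate on the target domain (< 0.5)
    n_iters           : total number of boosting rounds
    """
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / n_iters))
    beta_tgt = eps_t / (1.0 - eps_t)
    # Misclassified source samples are down-weighted (trust them less) ...
    w_src = w_src * beta_src ** err_src.astype(float)
    # ... while misclassified target samples are up-weighted (focus on them).
    w_tgt = w_tgt * beta_tgt ** (-err_tgt.astype(float))
    total = w_src.sum() + w_tgt.sum()
    return w_src / total, w_tgt / total

w_s, w_t = tradaboost_reweight(
    np.ones(4) / 8, np.ones(4) / 8,
    np.array([True, False, False, False]),
    np.array([True, False, False, False]),
    eps_t=0.25, n_iters=10)
```

After one round the misclassified source sample's weight falls below its correctly classified peers, while the misclassified target sample's weight rises, which is what steers later weak learners toward the new subject.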
Brain-Computer Interfaces
;
Humans
;
Support Vector Machine
;
Algorithms
;
Neural Networks, Computer
;
Imagination/physiology*
;
Pattern Recognition, Automated/methods*
;
Electroencephalography
;
Wavelet Analysis
3.Research on emotion recognition methods based on multi-modal physiological signal feature fusion.
Zhiwen ZHANG ; Naigong YU ; Yan BIAN ; Jinhan YAN
Journal of Biomedical Engineering 2025;42(1):17-23
Emotion classification and recognition is a crucial area in emotional computing. Physiological signals, such as electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied for integrating the signals, and both support vector machine (SVM) and extreme learning machine (ELM) were used for classification. The results showed that the classification accuracy was highest when the fusion weights were set to EEG 0.7, EMG 0.15, and EDA 0.15, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented an improvement of 5.81% and 2.95% compared to using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.
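The reported weights (EEG 0.7, EMG 0.15, EDA 0.15) can be pictured as a per-modality scaling before the fused vector is handed to the classifier. A minimal sketch, assuming weighted concatenation with per-modality normalization; the abstract does not specify the exact fusion operator, so that choice is an assumption:

```python
import numpy as np

def weighted_fusion(feats, weights):
    """Weight each modality's (z-scored) feature vector, then concatenate.

    feats   : dict of modality name -> 1-D feature vector
    weights : dict of modality name -> scalar weight (should sum to 1)
    The study's best-performing weights were EEG 0.7, EMG 0.15, EDA 0.15.
    """
    fused = []
    for name, vec in feats.items():
        v = np.asarray(vec, dtype=float)
        v = (v - v.mean()) / (v.std() + 1e-8)   # per-modality normalization
        fused.append(weights[name] * v)
    return np.concatenate(fused)

rng = np.random.default_rng(0)
feats = {"eeg": rng.random(32), "emg": rng.random(8), "eda": rng.random(4)}
fv = weighted_fusion(feats, {"eeg": 0.7, "emg": 0.15, "eda": 0.15})
```

The fused vector would then be passed to the SVM or ELM classifier; normalizing before weighting keeps the weights comparable across modalities with different amplitude ranges.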
Humans
;
Emotions/physiology*
;
Electroencephalography
;
Support Vector Machine
;
Electromyography
;
Signal Processing, Computer-Assisted
;
Galvanic Skin Response/physiology*
;
Machine Learning
;
Male
4.Audiovisual emotion recognition based on a multi-head cross attention mechanism.
Ziqiong WANG ; Dechun ZHAO ; Lu QIN ; Yi CHEN ; Yuchen SHEN
Journal of Biomedical Engineering 2025;42(1):24-31
In audiovisual emotion recognition, representation learning is a research direction receiving considerable attention, and the key lies in constructing effective affective representations with both consistency and variability. However, accurately realizing such affective representations remains challenging. For this reason, this paper proposed a cross-modal audiovisual recognition model based on a multi-head cross-attention mechanism. The model achieved feature fusion and modality alignment through a multi-head cross-attention architecture, and adopted a segmented training strategy to cope with the missing-modality problem. In addition, a unimodal auxiliary loss task was designed, with shared parameters used to preserve the independent information of each modality. Ultimately, the model achieved macro and micro F1 scores of 84.5% and 88.2%, respectively, on the crowd-sourced annotated multimodal emotion dataset of actor performances (CREMA-D). The model can effectively capture intra- and inter-modal feature representations of the audio and video modalities and unifies the unimodal and multimodal emotion recognition frameworks, providing a new solution for audiovisual emotion recognition.
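The fusion mechanism itself is standard multi-head attention with queries from one modality and keys/values from the other, so each video frame attends over all audio frames. A self-contained numpy sketch with random (untrained) projection matrices; dimensions and head count are illustrative, not the paper's configuration:

```python
import numpy as np

def multi_head_cross_attention(q_feats, kv_feats, n_heads, rng):
    """Minimal multi-head cross-attention sketch (random projections).

    Queries come from one modality (e.g. video), keys/values from the
    other (e.g. audio). Real models learn the projection matrices;
    here they are random, which suffices to show the shapes and flow.
    """
    d = q_feats.shape[1]
    assert d % n_heads == 0
    dh = d // n_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

    def project(x, W):  # (T, d) -> (heads, T, dh)
        y = x @ W
        return y.reshape(x.shape[0], n_heads, dh).transpose(1, 0, 2)

    Q, K, V = project(q_feats, Wq), project(kv_feats, Wk), project(kv_feats, Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)        # (heads, Tq, Tk)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)         # softmax over keys
    out = attn @ V                                         # (heads, Tq, dh)
    return out.transpose(1, 0, 2).reshape(q_feats.shape[0], d)

rng = np.random.default_rng(0)
video = rng.standard_normal((5, 16))   # 5 video frames, 16-dim features
audio = rng.standard_normal((9, 16))   # 9 audio frames, 16-dim features
fused = multi_head_cross_attention(video, audio, n_heads=4, rng=rng)
```

The output keeps the query modality's time axis, which is what lets the model align modalities of different frame rates before fusion.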
Emotions
;
Humans
;
Attention
;
Algorithms
5.Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals.
Yangmeng ZOU ; Lilin JIE ; Mingxun WANG ; Yong LIU ; Junhua LI
Journal of Biomedical Engineering 2025;42(1):32-41
Existing emotion recognition research is typically limited to static laboratory settings and does not fully handle changes in emotional states in dynamic scenarios. To address this problem, this paper proposes a method for dynamic continuous emotion recognition based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects to fill the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for the stimulus videos were performed every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency-band features were extracted from the preprocessed EEG and eye movement data. A cascade feature fusion approach was used to combine the EEG and eye movement features into an information-rich multimodal feature vector. This feature vector was input into four regression models (support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors) to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. The approach can accurately recognize emotion transitions in dynamic situations, offering higher accuracy and robustness than EEG or eye movement signals alone, making it well suited for practical applications.
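The pipeline of cascade (concatenation) fusion followed by regression is easy to sketch end to end. A toy version with synthetic per-window features and one of the four regressors, K-nearest neighbors; feature dimensions and the rating data are invented for illustration:

```python
import numpy as np

def cascade_fuse(eeg_feats, eye_feats):
    """Cascade fusion: concatenate per-window EEG and eye-movement features."""
    return np.concatenate([eeg_feats, eye_feats], axis=1)

def knn_regress(X_train, y_train, x, k=3):
    """Toy K-nearest-neighbors regression on the fused feature vectors."""
    d = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to x
    idx = np.argsort(d)[:k]                  # k closest training windows
    return y_train[idx].mean()               # average their labels

rng = np.random.default_rng(1)
eeg = rng.standard_normal((50, 10))   # 50 five-second windows, 10 band features
eye = rng.standard_normal((50, 4))    # 4 eye-movement features per window
X = cascade_fuse(eeg, eye)
y = rng.uniform(1, 9, size=50)        # valence ratings on the paper's 1-9 scale
pred = knn_regress(X[1:], y[1:], X[0])
```

Because the labels are sampled every five seconds, each fused window maps to one continuous valence (or arousal) value, which is what makes the task a regression rather than a classification problem.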
Humans
;
Electroencephalography/methods*
;
Emotions/physiology*
;
Eye Movements/physiology*
;
Signal Processing, Computer-Assisted
;
Support Vector Machine
;
Algorithms
6.Prediction method of paroxysmal atrial fibrillation based on multimodal feature fusion.
Yongjian LI ; Lei LIU ; Meng CHEN ; Yixue LI ; Yuchen WANG ; Shoushui WEI
Journal of Biomedical Engineering 2025;42(1):42-48
The risk prediction of paroxysmal atrial fibrillation (PAF) is a challenge in the field of biomedical engineering. This study integrated the advantages of machine-learning feature engineering and end-to-end deep-learning modeling to propose a PAF risk prediction method based on multimodal feature fusion. The study used four different feature selection methods together with Pearson correlation analysis to determine the optimal multimodal feature set, and employed a random forest for PAF risk assessment. The proposed method achieved an accuracy of (92.3 ± 2.1)% and an F1 score of (91.6 ± 2.9)% on a public dataset, and an accuracy of (91.4 ± 2.0)% and an F1 score of (90.8 ± 2.4)% on a clinical dataset. The method generalizes across multi-center datasets and holds promising clinical application prospects.
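Of the selection steps named, the Pearson-correlation filter is the most mechanical: rank features by the absolute correlation of each column with the label and keep the strongest. A minimal sketch of that single step (the four other selection methods and the random forest classifier are not reproduced here):

```python
import numpy as np

def pearson_select(X, y, top_k):
    """Rank features by |Pearson r| with the label and keep the top_k.

    This mirrors only the correlation-filter step of the study's
    multi-stage feature selection; thresholds are illustrative.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    keep = np.argsort(-np.abs(r))[:top_k]   # indices of strongest features
    return np.sort(keep)

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200).astype(float)   # PAF / non-PAF labels
X = rng.standard_normal((200, 20))               # 20 candidate features
X[:, 3] += 2.0 * y            # make feature 3 strongly label-correlated
selected = pearson_select(X, y, top_k=5)
```

On the synthetic data, the deliberately informative feature (index 3) survives the filter while most noise features are discarded, which is the intended effect before the classifier is trained.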
Humans
;
Atrial Fibrillation/diagnosis*
;
Machine Learning
;
Deep Learning
;
Risk Assessment/methods*
7.Research on arrhythmia classification algorithm based on adaptive multi-feature fusion network.
Mengmeng HUANG ; Mingfeng JIANG ; Yang LI ; Xiaoyu HE ; Zefeng WANG ; Yongquan WU ; Wei KE
Journal of Biomedical Engineering 2025;42(1):49-56
Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly perform arrhythmia classification, which provides significant clinical value for the early screening of arrhythmias. How to select arrhythmia features effectively under limited abnormal-sample supervision is an urgent issue. This paper proposed an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracted RR-interval features from ECG signals, used a one-dimensional convolutional neural network (1D-CNN) to extract time-domain deep features, and used Mel frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract frequency-domain deep features. The features were fused with an adaptive weighting strategy for arrhythmia classification. The algorithm was evaluated under the inter-patient paradigm on the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH). Experimental results showed an average precision of 75.2%, an average recall of 70.1%, and an average F1-score of 71.3%, demonstrating high classification accuracy and providing algorithmic support for arrhythmia classification in wearable devices.
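An adaptive weighting strategy over feature branches is commonly realized as a small softmax gate whose weights are learned jointly with the network. A sketch of the fusion step alone, with hand-set gate logits standing in for the learned gating layer; the branch contents and dimensions are placeholders:

```python
import numpy as np

def adaptive_fusion(branch_feats, gate_logits):
    """Adaptively weight feature branches with a softmax gate (sketch).

    branch_feats : (n_branches, d) deep features, e.g. the RR-interval,
                   1D-CNN time-domain, and MFCC/2D-CNN frequency-domain
                   branches described in the abstract
    gate_logits  : (n_branches,) scores a small gating layer would learn
    """
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()                       # softmax: weights sum to 1
    return w, (w[:, None] * branch_feats).sum(axis=0)

feats = np.arange(12, dtype=float).reshape(3, 4)  # three 4-dim branches
w, fused = adaptive_fusion(feats, np.array([0.0, 1.0, 2.0]))
```

Because the gate is differentiable, gradients from the classification loss can adjust how much each branch contributes per input, which is what "adaptive" means here as opposed to the fixed weights of a hand-tuned fusion.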
Humans
;
Arrhythmias, Cardiac/diagnosis*
;
Algorithms
;
Electrocardiography/methods*
;
Neural Networks, Computer
;
Signal Processing, Computer-Assisted
;
Deep Learning
;
Classification Algorithms
8.Research on intelligent fetal heart monitoring model based on deep active learning.
Bin QUAN ; Yajing HUANG ; Yanfang LI ; Qinqun CHEN ; Honglai ZHANG ; Li LI ; Guiqing LIU ; Hang WEI
Journal of Biomedical Engineering 2025;42(1):57-64
Cardiotocography (CTG) is a non-invasive and important tool for diagnosing fetal distress during pregnancy. To meet the needs of intelligent fetal heart monitoring based on deep learning, this paper proposes TWD-MOAL, a deep active learning algorithm based on three-way decision (TWD) theory and multi-objective optimization active learning (MOAL). During the training of a convolutional neural network (CNN) classification model, the algorithm uses TWD theory to select high-confidence samples as pseudo-labeled samples in a fine-grained batch processing mode, while low-confidence samples are annotated by obstetrics experts. The TWD-MOAL algorithm was validated on a dataset of 16,355 prenatal CTG records collected by our group. Experimental results showed that the algorithm achieved an accuracy of 80.63% using only 40% of the labeled samples and outperformed existing active learning algorithms under other frameworks on various indicators. The study has shown that the intelligent fetal heart monitoring model based on TWD-MOAL is reasonable and feasible. The algorithm significantly reduces the time and cost of labeling by obstetrics experts and effectively addresses the problem of data imbalance in clinical CTG signals, which is of great significance for assisting obstetricians in interpreting CTG signals and realizing intelligent fetal monitoring.
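The three-way decision splits each batch by classifier confidence into accept (pseudo-label), reject (send to an expert), and a boundary region that waits for a later round. A minimal sketch of that split; the thresholds and the two-class probabilities are illustrative, not the paper's values:

```python
import numpy as np

def three_way_split(probs, alpha=0.9, beta=0.6):
    """Three-way decision over classifier confidences (sketch).

    conf >= alpha  -> accept: use the prediction as a pseudo-label
    conf <  beta   -> defer:  route to an obstetrics expert for labeling
    otherwise      -> boundary: leave unlabeled for a later round
    alpha and beta are illustrative thresholds, not the paper's settings.
    """
    conf = probs.max(axis=1)            # confidence of the top class
    accept = conf >= alpha
    defer = conf < beta
    boundary = ~accept & ~defer
    return accept, boundary, defer

probs = np.array([[0.95, 0.05],   # confident  -> pseudo-label
                  [0.70, 0.30],   # boundary   -> wait
                  [0.55, 0.45]])  # uncertain  -> expert
accept, boundary, defer = three_way_split(probs)
```

Only the deferred samples cost expert time, which is how the method reaches 80.63% accuracy while labeling just 40% of the data.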
Humans
;
Pregnancy
;
Female
;
Cardiotocography/methods*
;
Deep Learning
;
Neural Networks, Computer
;
Algorithms
;
Fetal Monitoring/methods*
;
Heart Rate, Fetal
;
Fetal Distress/diagnosis*
;
Fetal Heart/physiology*
9.Research and application implementation of the Internet of Things scheme for intensive care unit medical equipment.
Hong LIANG ; Jipeng SUN ; Yong FAN ; Desen CAO ; Kunlun HE ; Zhengbo ZHANG ; Zhi MAO
Journal of Biomedical Engineering 2025;42(1):65-72
The intensive care unit (ICU) is a highly equipment-intensive area with a wide variety of medical devices, and the accuracy and timeliness of medical equipment data collection are highly demanded. Integrating the Internet of Things (IoT) into ICU medical devices is of great significance for enhancing the quality of medical care and nursing, as well as for the advancement of digital and intelligent ICUs. This study focuses on the construction of an IoT for ICU medical devices and proposes innovative solutions covering overall architecture design, device connection, data collection, data standardization, platform construction, and application implementation. The overall architecture was designed according to the perception layer, network layer, platform layer, and application layer; three modes of device connection and data acquisition were proposed; and data standardization based on the Integrating the Healthcare Enterprise-Patient Care Device (IHE-PCD) profile was adopted. The approach was validated at the Chinese People's Liberation Army General Hospital: a total of 122 devices in four ICU wards were connected to the IoT, storing 21.76 billion data items with a data volume of 12.5 TB, which solved the problems of systematic medical equipment data collection and data integration in ICUs. These results demonstrate the feasibility and reliability of the approach. The results provide a reference for the construction of hospital ICU IoT systems and offer richer data for medical big data analysis, supporting the improvement of ICU medical services and promoting the digital and intelligent development of the ICU.
Intensive Care Units
;
Internet of Things
;
Humans
;
Internet
;
Data Collection
10.Effect of repetitive transcranial magnetic stimulation on the excitability of glutamatergic and GABAergic neurons in the mouse hippocampus.
Jiale WANG ; Chong DING ; Rui FU ; Ze ZHANG ; Junqiao ZHAO ; Haijun ZHU
Journal of Biomedical Engineering 2025;42(1):73-81
Repetitive transcranial magnetic stimulation (rTMS) is one of the most commonly used brain stimulation techniques. To investigate the effects of rTMS on the excitability of different types of neurons, this study examined, from an electrophysiological perspective, the effects of rTMS on cognitive function in mice and on the excitability of hippocampal glutamatergic and gamma-aminobutyric acid (GABA)-ergic neurons. Mice were randomly divided into a glutamatergic control group, a glutamatergic magnetic stimulation group, a GABAergic control group, and a GABAergic magnetic stimulation group. The four groups were injected with adeno-associated virus to label the two types of neurons and were implanted with optical fibers. The stimulation groups received 14 days of stimulation and the control groups received 14 days of sham stimulation. Calcium fluorescence intensity was recorded with an optical fiber recording system, behavioral experiments were conducted to assess changes in cognitive function, and a patch-clamp system was used to detect changes in neuronal action potential characteristics. The results showed that rTMS significantly improved cognitive function in mice, increased the calcium fluorescence amplitude of hippocampal glutamatergic and GABAergic neurons, and enhanced action potential-related indices of both neuron types. These results suggest that rTMS can improve cognitive ability in mice by enhancing the excitability of hippocampal glutamatergic and GABAergic neurons.
Animals
;
Mice
;
Hippocampus/cytology*
;
Transcranial Magnetic Stimulation
;
Neurons/physiology*
;
Male
;
Cognition/physiology*
;
gamma-Aminobutyric Acid/metabolism*
;
Action Potentials/physiology*
