1.Scientific analysis and usage reassessment of suspected medicinal cinnabar unearthed from Mawangdui Tomb No.3 of the Han Dynasty.
Ning-Ning XU ; Ting-Yan REN ; Ming-Jie LI ; Pan XIAO ; Guo-Hui SHEN ; Ji-Qing BAI ; Qi LIU
China Journal of Chinese Materia Medica 2025;50(11):2915-2923
Cinnabar (HgS) was widely used in ancient times for medicinal purposes, religious rituals, and pigments. A group of bright red powdery clumps was excavated from Mawangdui Tomb No.3 of the Han Dynasty. Early studies considered the clumps evidence of cinnabar's medicinal use during the Qin-Han period. This study employed a range of archaeometric techniques, including extended-depth-of-field stereo imaging, micro-CT, scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS), Raman spectroscopy, and Fourier transform infrared (FTIR) spectrometry, to systematically analyze the material composition and structural characteristics of these remains. The results revealed that the cinnabar particles were granular, finely ground, and tightly bound to a silk matrix, with no detectable excipients typically associated with medicinal formulations. Micro-CT imaging indicated a well-preserved textile structure, with clear signs of sedimentary accumulation and mechanical damage. Based on historical and archaeological evidence, this study suggests that these remains are more likely degraded accumulations of cinnabar-colored silk textiles than medicinal cinnabar. By clarifying the diversity of ancient cinnabar applications and preservation states, this study provides new insights for the archaeological identification of mineral medicinal materials and contributes to the standardized study of Chinese medicinal materials and to the understanding of the historical use of cinnabar.
History, Ancient; China; Humans; Medicine, Chinese Traditional/history*; Archaeology; Drugs, Chinese Herbal/history*; Spectroscopy, Fourier Transform Infrared; Spectrum Analysis, Raman; Mercury Compounds
2.The joint analysis of heart health and mental health based on continual learning.
Hongxiang GAO ; Zhipeng CAI ; Jianqing LI ; Chengyu LIU
Journal of Biomedical Engineering 2025;42(1):1-8
Cardiovascular diseases and psychological disorders represent two major threats to human physical and mental health. Research on electrocardiogram (ECG) signals offers valuable opportunities to address these issues. However, existing methods are constrained by limitations in understanding ECG features and transferring knowledge across tasks. To address these challenges, this study developed a multi-resolution feature encoding network based on residual networks, which effectively extracted local morphological features and global rhythm features of ECG signals, thereby enhancing feature representation. Furthermore, a model compression-based continual learning method was proposed, enabling the structured transfer of knowledge from simpler tasks to more complex ones, resulting in improved performance in downstream tasks. The multi-resolution learning model demonstrated superior or comparable performance to state-of-the-art algorithms across five datasets, including tasks such as ECG QRS complex detection, arrhythmia classification, and emotion classification. The continual learning method achieved significant improvements over conventional training approaches in cross-domain, cross-task, and incremental data scenarios. These results highlight the potential of the proposed method for effective cross-task knowledge transfer in ECG analysis and offer a new perspective for multi-task learning using ECG signals.
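The multi-resolution idea, capturing local beat morphology with narrow filters and slower rhythm trends with wide ones, can be illustrated with a toy NumPy sketch (this is not the paper's residual network; the smoothing kernels, pooled summaries, and function name are illustrative assumptions):

```python
import numpy as np

def multi_resolution_features(ecg, kernel_sizes=(5, 15, 45)):
    """Toy multi-resolution encoder: moving-average kernels of several
    widths capture local morphology (small kernels) and slower rhythm
    trends (large kernels); simple pooling summarizes each branch."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                      # simple smoothing filter
        branch = np.convolve(ecg, kernel, mode="valid")
        feats.append([branch.mean(), branch.std()])  # pooled branch summary
    return np.concatenate(feats)

# Synthetic "ECG": a fast oscillation riding on slow baseline wander
t = np.linspace(0, 10, 2000)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.1 * t)
f = multi_resolution_features(ecg)
print(f.shape)  # one (mean, std) pair per resolution -> (6,)
```

A learned network would replace the fixed averaging kernels with trained convolutions, but the principle of concatenating features from several temporal scales is the same.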
Humans; Electrocardiography/methods*; Mental Health; Algorithms; Signal Processing, Computer-Assisted; Machine Learning; Arrhythmias, Cardiac/diagnosis*; Cardiovascular Diseases; Neural Networks, Computer; Mental Disorders
3.Research on motor imagery recognition based on feature fusion and transfer adaptive boosting.
Yuxin ZHANG ; Chenrui ZHANG ; Shihao SUN ; Guizhi XU
Journal of Biomedical Engineering 2025;42(1):9-16
This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the issue of low accuracy in motor imagery (MI) recognition across subjects, thereby increasing the reliability of MI-based brain-computer interfaces (BCI) for cross-individual use. Using the autoregressive model, power spectral density, and discrete wavelet transform, time-frequency domain features of MI can be obtained, while the filter bank common spatial pattern is used to extract spatial domain features, and multi-scale dispersion entropy is employed to extract nonlinear features. The IV-2a dataset from the 4th International BCI Competition was used for the binary classification task, with the pattern recognition model constructed by combining the improved TrAdaboost ensemble learning algorithm with support vector machine (SVM), k-nearest neighbor (KNN), and mind evolutionary algorithm-based back propagation (MEA-BP) neural network. The results show that the SVM-based TrAdaboost ensemble learning algorithm has the best performance when 30% of the target domain instance data is migrated, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can be used to recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.
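The TrAdaboost weight update at the heart of this transfer scheme, boosting misclassified target-domain instances while damping misclassified source-domain ones, can be sketched in one round (a minimal illustration with synthetic weights and error indicators; the variable names and the details of the paper's improved variant are assumptions):

```python
import numpy as np

def tradaboost_update(w_src, w_tgt, miss_src, miss_tgt, n_iters):
    """One TrAdaboost round: target-domain weights grow on mistakes
    (classic AdaBoost behavior), while source-domain weights shrink on
    mistakes, so source instances that conflict with the target task
    gradually fade out of the training distribution."""
    eps = np.sum(w_tgt * miss_tgt) / np.sum(w_tgt)      # weighted target error
    eps = min(eps, 0.499)                               # keep beta_t well-defined
    beta_t = eps / (1 - eps)                            # target update factor
    beta = 1 / (1 + np.sqrt(2 * np.log(len(w_src)) / n_iters))  # source factor
    w_tgt = w_tgt * beta_t ** (-miss_tgt)               # boost misclassified target
    w_src = w_src * beta ** miss_src                    # damp misclassified source
    return w_src, w_tgt

w_src = np.ones(4) / 8
w_tgt = np.ones(4) / 8
miss_src = np.array([1, 0, 0, 1])   # 1 = misclassified by the weak learner
miss_tgt = np.array([0, 1, 0, 0])
w_src2, w_tgt2 = tradaboost_update(w_src, w_tgt, miss_src, miss_tgt, n_iters=10)
print(w_tgt2[1] > w_tgt[1], w_src2[0] < w_src[0])  # True True
```

In the full algorithm this update repeats for each boosting round with a freshly trained weak learner (SVM, KNN, or MEA-BP in the paper), and the final hypothesis votes over the later rounds.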
Brain-Computer Interfaces; Humans; Support Vector Machine; Algorithms; Neural Networks, Computer; Imagination/physiology*; Pattern Recognition, Automated/methods*; Electroencephalography; Wavelet Analysis
4.Research on emotion recognition methods based on multi-modal physiological signal feature fusion.
Zhiwen ZHANG ; Naigong YU ; Yan BIAN ; Jinhan YAN
Journal of Biomedical Engineering 2025;42(1):17-23
Emotion classification and recognition is a crucial area in affective computing. Physiological signals, such as the electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied to integrate the signals, and both support vector machine (SVM) and extreme learning machine (ELM) classifiers were used for classification. The results showed that classification accuracy was highest when the fusion weights were set to 0.7 for EEG, 0.15 for EMG, and 0.15 for EDA, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented improvements of 5.81% and 2.95% compared to using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.
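The feature-weighted fusion step can be sketched directly: standardize each modality so no channel dominates by raw scale, apply the reported weights (EEG 0.7, EMG 0.15, EDA 0.15), and concatenate into one vector for the classifier (a minimal sketch; the z-scoring and function name are assumptions, not the paper's exact preprocessing):

```python
import numpy as np

def weighted_fusion(eeg_feat, emg_feat, eda_feat, w=(0.7, 0.15, 0.15)):
    """Feature-weighted fusion: z-score each modality's feature vector,
    scale by its fusion weight, and concatenate into a single vector."""
    def z(x):
        x = np.asarray(x, float)
        return (x - x.mean()) / (x.std() + 1e-8)    # avoid divide-by-zero
    parts = [wi * z(f) for wi, f in zip(w, (eeg_feat, emg_feat, eda_feat))]
    return np.concatenate(parts)

# Toy feature vectors of different lengths and scales
fused = weighted_fusion([1.0, 2.0, 3.0], [0.5, 0.7], [10.0, 30.0])
print(fused.shape)  # (7,)
```

The fused vector is then what an SVM or ELM would consume; the weights themselves were found empirically in the study.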
Humans; Emotions/physiology*; Electroencephalography; Support Vector Machine; Electromyography; Signal Processing, Computer-Assisted; Galvanic Skin Response/physiology*; Machine Learning; Male
5.Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals.
Yangmeng ZOU ; Lilin JIE ; Mingxun WANG ; Yong LIU ; Junhua LI
Journal of Biomedical Engineering 2025;42(1):32-41
Existing emotion recognition research is typically limited to static laboratory settings and does not fully handle changes in emotional state in dynamic scenarios. To address this problem, this paper proposes a method for dynamic continuous emotion recognition based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects to fill the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for stimulus videos were performed every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency band features were extracted from the preprocessed EEG and eye movement data. A cascade feature fusion approach was used to effectively combine EEG and eye movement features, generating an information-rich multimodal feature vector. This feature vector was input into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. This approach can accurately recognize various emotion transitions in dynamic situations, offering higher accuracy and robustness compared to using either EEG or eye movement signals alone, making it well-suited for practical applications.
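The label preparation step, mapping the 1-9 valence/arousal ratings onto a common scale for regression, can be sketched as a simple min-max normalization (an assumption: the paper states the labels were normalized but not the exact mapping):

```python
def normalize_ratings(ratings, lo=1.0, hi=9.0):
    """Map 1-9 valence/arousal ratings onto [0, 1] so regression targets
    share a common scale across subjects and stimulus videos. A min-max
    sketch; the paper's exact normalization may differ."""
    return [(r - lo) / (hi - lo) for r in ratings]

# One rating every five seconds over a 15 s clip
print(normalize_ratings([1, 5, 9]))  # [0.0, 0.5, 1.0]
```

The normalized sequence then serves as the continuous target for the four regressors (SVR, decision tree, random forest, KNN), evaluated by mean square error.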
Humans; Electroencephalography/methods*; Emotions/physiology*; Eye Movements/physiology*; Signal Processing, Computer-Assisted; Support Vector Machine; Algorithms
6.Research on arrhythmia classification algorithm based on adaptive multi-feature fusion network.
Mengmeng HUANG ; Mingfeng JIANG ; Yang LI ; Xiaoyu HE ; Zefeng WANG ; Yongquan WU ; Wei KE
Journal of Biomedical Engineering 2025;42(1):49-56
Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly implement arrhythmia classification, which provides significant clinical value for the early screening of arrhythmias. How to select arrhythmia features effectively under limited abnormal sample supervision is an urgent issue to address. This paper proposed an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracted RR interval features from ECG signals, employed a one-dimensional convolutional neural network (1D-CNN) to extract time-domain deep features, and employed Mel frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract frequency-domain deep features. The features were fused using an adaptive weighting strategy for arrhythmia classification. The paper used the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH) and evaluated the algorithm under the inter-patient paradigm. Experimental results demonstrated that the proposed algorithm achieved an average precision of 75.2%, an average recall of 70.1%, and an average F1-score of 71.3%, showing high classification accuracy and providing algorithmic support for arrhythmia classification in wearable devices.
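The RR-interval branch of such a network starts from something very simple: converting detected R-peak sample indices into inter-beat intervals in seconds (a minimal sketch; the peak indices, sampling rate, and feature choice here are illustrative, not the paper's exact pipeline):

```python
def rr_features(r_peaks, fs):
    """RR-interval features from R-peak sample indices: each interval is
    the gap between consecutive peaks divided by the sampling rate, and
    the mean RR summarizes the record's overall rate."""
    rr = [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]  # seconds
    mean_rr = sum(rr) / len(rr)
    return rr, mean_rr

# Toy R-peaks at MIT-BIH's 360 Hz sampling rate
rr, mean_rr = rr_features([100, 460, 830, 1190], fs=360)
print([round(x, 3) for x in rr], round(mean_rr, 3))  # [1.0, 1.028, 1.0] 1.009
```

Ratios such as current-RR over mean-RR are common hand-crafted inputs that the adaptive fusion network would combine with the 1D-CNN and MFCC/2D-CNN deep features.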
Humans; Arrhythmias, Cardiac/diagnosis*; Algorithms; Electrocardiography/methods*; Neural Networks, Computer; Signal Processing, Computer-Assisted; Deep Learning; Classification Algorithms
7.A review of deep learning methods for non-contact heart rate measurement based on facial videos.
Shuyue GUAN ; Yimou LYU ; Yongchun LI ; Chengzhi XIA ; Lin QI ; Lisheng XU
Journal of Biomedical Engineering 2025;42(1):197-204
Heart rate is a crucial indicator of human health with significant physiological importance. Traditional contact methods for measuring heart rate, such as electrocardiography or wristbands, may not always meet the need for convenient health monitoring. Remote photoplethysmography (rPPG) provides a non-contact method for measuring heart rate and other physiological indicators by analyzing blood volume pulse signals. This approach is non-invasive, requires no direct contact, and allows for long-term healthcare monitoring. Deep learning has emerged as a powerful tool for processing complex image and video data and has been increasingly employed to extract heart rate signals remotely. This article reviews the latest research advancements in rPPG-based heart rate measurement using deep learning, summarizes available public datasets, and explores future research directions and potential advancements in non-contact heart rate measurement.
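Once an rPPG pipeline has recovered a blood volume pulse waveform from the face video, the heart rate itself is typically read off the spectrum. A minimal frequency-domain sketch (the band limits and function name are conventional assumptions, not taken from any specific reviewed method):

```python
import numpy as np

def estimate_hr(pulse, fs, lo=0.7, hi=4.0):
    """Estimate heart rate from a blood-volume-pulse signal by locating
    the dominant spectral peak within the plausible HR band
    (0.7-4 Hz, i.e. roughly 42-240 beats/min)."""
    pulse = np.asarray(pulse, float) - np.mean(pulse)   # remove DC offset
    spec = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spec[band])]
    return peak * 60.0                                  # Hz -> beats per minute

fs = 30.0                                # typical camera frame rate
t = np.arange(0, 20, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)      # synthetic 1.2 Hz pulse -> 72 bpm
print(round(estimate_hr(pulse, fs), 1))  # 72.0
```

Deep learning methods reviewed in the article replace the signal-recovery stage (and sometimes this spectral stage too) with learned models, but the peak-in-band readout remains a common final step.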
Humans; Deep Learning; Heart Rate/physiology*; Photoplethysmography/methods*; Video Recording; Face; Monitoring, Physiologic/methods*; Signal Processing, Computer-Assisted
8.Cross-session motor imagery-electroencephalography decoding with Riemannian spatial filtering and domain adaptation.
Lincong PAN ; Xinwei SUN ; Kun WANG ; Yupei CAO ; Minpeng XU ; Dong MING
Journal of Biomedical Engineering 2025;42(2):272-279
Motor imagery (MI) is a mental process that can be recognized by electroencephalography (EEG) without actual movement. It has significant research value and application potential in the field of brain-computer interface (BCI) technology. To address the challenges posed by the non-stationary nature and low signal-to-noise ratio of MI-EEG signals, this study proposed a Riemannian spatial filtering and domain adaptation (RSFDA) method for improving the accuracy and efficiency of cross-session MI-BCI classification tasks. The approach addressed the issue of inconsistent data distribution between source and target domains through a multi-module collaborative framework, which enhanced the generalization capability of cross-session MI-EEG classification models. Comparative experiments were conducted on three public datasets to evaluate RSFDA against eight existing methods in terms of classification accuracy and computational efficiency. The experimental results demonstrated that RSFDA achieved an average classification accuracy of 79.37%, outperforming the state-of-the-art deep learning method Tensor-CSPNet (76.46%) by 2.91% (P < 0.01). Furthermore, the proposed method showed significantly lower computational costs, requiring only approximately 3 minutes of average training time compared to Tensor-CSPNet's 25 minutes, representing a reduction of 22 minutes. These findings indicate that the RSFDA method demonstrates superior performance in cross-session MI-EEG classification tasks by effectively balancing accuracy and efficiency. However, its applicability in complex transfer learning scenarios remains to be further investigated.
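Riemannian approaches to MI-EEG generally represent each trial by its channel covariance matrix (a symmetric positive-definite point) and map it into a tangent space where ordinary classifiers apply. A minimal NumPy sketch of that common step (RSFDA's exact mapping, reference point, and filtering are not specified here and this is not the authors' implementation):

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of a symmetric positive-definite matrix via its
    eigendecomposition (eigh guarantees real eigenvalues/vectors)."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.log(vals)) @ vecs.T

def tangent_feature(cov, ref):
    """Project a trial covariance into the tangent space at a reference
    SPD point: log(ref^{-1/2} C ref^{-1/2}), flattened into a feature
    vector. A standard Riemannian-geometry step for MI-EEG."""
    vals, vecs = np.linalg.eigh(ref)
    ref_isqrt = (vecs / np.sqrt(vals)) @ vecs.T      # ref^{-1/2}
    return spd_logm(ref_isqrt @ cov @ ref_isqrt).ravel()

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 256))                    # 8 channels, 256 samples
cov = X @ X.T / X.shape[1] + 1e-6 * np.eye(8)        # regularized covariance
feat = tangent_feature(cov, ref=np.eye(8))
print(feat.shape)  # (64,)
```

Domain adaptation methods in this family then align source- and target-session distributions in this tangent space (or recenter the reference point per session) before training the classifier.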
Electroencephalography/methods*; Brain-Computer Interfaces; Humans; Imagination/physiology*; Signal Processing, Computer-Assisted; Movement/physiology*; Signal-To-Noise Ratio; Deep Learning; Algorithms
9.A portable steady-state visual evoked potential brain-computer interface system for smart healthcare.
Yisen ZHU ; Zhouyu JI ; Shuran LI ; Haicheng WANG ; Yunfa FU ; Hongtao WANG
Journal of Biomedical Engineering 2025;42(3):455-463
This paper presents a portable brain-computer interface (BCI) system tailored for smart healthcare. Through the decoding of steady-state visual evoked potentials (SSVEP), the system can rapidly and accurately identify the intentions of subjects, thereby meeting the practical demands of daily medical scenarios. Firstly, an SSVEP stimulation interface and electroencephalogram (EEG) signal acquisition software were designed, enabling the system to execute multi-target and multi-task operations while also incorporating data visualization functionality. Secondly, the EEG signals recorded from the occipital region were decomposed into eight sub-frequency bands using filter bank canonical correlation analysis (FBCCA). Subsequently, the similarity between each sub-band signal and the reference signals was computed to achieve efficient SSVEP decoding. Finally, 15 subjects were recruited to participate in an online evaluation of the system. The experimental results indicated that, in real-world scenarios, the system achieved an average accuracy of 85.19% in identifying the intentions of the subjects and an information transfer rate (ITR) of 37.52 bit/min. The system was awarded third prize in the Visual BCI Innovation Application Development competition at the 2024 World Robot Contest, validating its effectiveness. In conclusion, this study developed a portable, multifunctional SSVEP online decoding system, providing an effective approach for human-computer interaction in smart healthcare.
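The CCA core of FBCCA, scoring how well an EEG segment correlates with sine/cosine references at each candidate stimulus frequency, can be sketched with NumPy (the filter-bank decomposition and sub-band weighting are omitted; channel counts, harmonics, and function names here are illustrative assumptions):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between two multichannel signals:
    orthonormalize each centered block with QR, then take the top
    singular value of Qx.T @ Qy (a standard CCA identity)."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_score(eeg, freq, fs):
    """Correlate an EEG segment (samples x channels) with sine/cosine
    references at a stimulus frequency and its 2nd harmonic."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in (1, 2) for f in (np.sin, np.cos)])
    return max_canonical_corr(eeg, refs)

# Synthetic 2-channel occipital EEG responding to a 10 Hz stimulus
fs = 250.0
t = np.arange(500) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t), np.cos(2 * np.pi * 10 * t)])
eeg += 0.1 * np.random.default_rng(1).standard_normal(eeg.shape)
scores = {f: ssvep_score(eeg, f, fs) for f in (8.0, 10.0, 12.0)}
print(max(scores, key=scores.get))  # the 10 Hz target should win
```

FBCCA extends this by running the same scoring on each of the eight sub-bands and combining the squared correlations with decreasing weights before picking the best target.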
Brain-Computer Interfaces; Humans; Evoked Potentials, Visual/physiology*; Electroencephalography; Signal Processing, Computer-Assisted; Software; Adult; Male
10.Performance evaluation of a wearable steady-state visual evoked potential based brain-computer interface in real-life scenario.
Xiaodong LI ; Xiang CAO ; Junlin WANG ; Weijie ZHU ; Yong HUANG ; Feng WAN ; Yong HU
Journal of Biomedical Engineering 2025;42(3):464-472
Brain-computer interface (BCI) technology has high application value in the field of healthcare. However, in practical clinical applications, both convenience and system performance must be considered. Wearable BCIs are generally highly convenient, but their performance in real-life scenarios needs to be evaluated. This study proposed a wearable steady-state visual evoked potential (SSVEP)-based BCI system equipped with a small-sized electroencephalogram (EEG) collector and a high-performance training-free decoding algorithm. Ten healthy subjects participated in the test of the BCI system under simplified experimental preparation. The results showed that the average classification accuracy of this BCI was 94.10% for 40 targets, with no significant difference compared to a dataset collected under laboratory conditions. The system achieved a maximum information transfer rate (ITR) of 115.25 bit/min with the 8-channel signal and 98.49 bit/min with the 4-channel signal, indicating that the 4-channel solution can serve as an option for few-channel BCIs. Overall, this wearable SSVEP-BCI achieved good performance in real-life scenarios, which helps to promote BCI technology in clinical practice.
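The ITR figures quoted above come from the standard Wolpaw formula, which combines the number of targets, the accuracy, and the time per selection (a sketch; the 2.5 s trial length in the example is an assumption, not a value reported in the abstract):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Standard Wolpaw information-transfer-rate formula for an N-target
    BCI: bits conveyed per selection, scaled to selections per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:  # the entropy terms vanish at p = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# e.g. a 40-target speller at 94.1% accuracy; trial length is assumed
print(round(itr_bits_per_min(40, 0.941, 2.5), 1))
```

At perfect accuracy the rate reduces to log2(N) bits per selection, which is why shortening the trial window is as valuable as raising accuracy once accuracy is already high.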
Brain-Computer Interfaces; Humans; Evoked Potentials, Visual/physiology*; Electroencephalography; Wearable Electronic Devices; Algorithms; Signal Processing, Computer-Assisted; Adult; Male
