1. Multi-source adversarial adaptation with calibration for electroencephalogram-based classification of meditation and resting states.
Mingyu GOU ; Haolong YIN ; Tianzhen CHEN ; Fei CHENG ; Jiang DU ; Baoliang LYU ; Weilong ZHENG
Journal of Biomedical Engineering 2025;42(4):668-677
Meditation aims to guide individuals into a state of deep calm and focused attention, and in recent years it has shown promising potential in medical treatment. Numerous studies have demonstrated that electroencephalogram (EEG) patterns change during meditation, suggesting the feasibility of using deep learning techniques to monitor meditation states. However, significant inter-subject differences in EEG signals pose challenges to the performance of such monitoring systems. To address this issue, this study proposed a novel model, the calibrated multi-source adversarial adaptation network (CMAAN). The model first trained multiple domain-adversarial neural networks in a pairwise manner, one between each source-domain individual and the target-domain individual. These networks were then integrated through a calibration process that uses a small amount of labeled data from the target domain to enhance performance. We evaluated the proposed model on an EEG dataset collected from 18 subjects undergoing methamphetamine rehabilitation, where it achieved a classification accuracy of 73.09%. In addition, based on the learned model, we analyzed the key EEG frequency bands and brain regions involved in the meditation process. The proposed multi-source domain adaptation framework improves both the performance and robustness of EEG-based meditation monitoring and holds great promise for applications in biomedical informatics and clinical practice.
Humans; Electroencephalography/methods*; Meditation; Calibration; Neural Networks, Computer; Brain/physiology*; Rest/physiology*; Deep Learning; Signal Processing, Computer-Assisted
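As a rough illustration of the training scheme described in the abstract above, the sketch below shows how pairwise domain-adversarial networks with gradient reversal could be combined through calibration-weighted soft voting on a small labeled target set. It is a minimal PyTorch reconstruction under assumed settings: the feature dimension, network sizes, the `GradientReversal` helper, and the accuracy-based weighting are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of a calibrated multi-source adversarial adaptation setup
# (CMAAN-style); layer sizes, feature dimension, and calibration weighting are assumptions.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class PairwiseDANN(nn.Module):
    """One domain-adversarial network trained between a single source subject and the target subject."""
    def __init__(self, n_features=310, n_classes=2):   # 310 = assumed 62 channels x 5 bands
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, 64), nn.ReLU())
        self.classifier = nn.Linear(64, n_classes)     # meditation vs. resting
        self.domain_disc = nn.Linear(64, 2)            # source vs. target domain

    def forward(self, x, lambd=1.0):
        z = self.feature(x)
        return self.classifier(z), self.domain_disc(GradientReversal.apply(z, lambd))


def calibration_weights(models, x_cal, y_cal):
    """Weight each source-specific model by its accuracy on a small labeled target set."""
    accs = []
    with torch.no_grad():
        for m in models:
            pred = m(x_cal)[0].argmax(dim=1)
            accs.append((pred == y_cal).float().mean())
    w = torch.stack(accs)
    return w / w.sum()


def ensemble_predict(models, weights, x):
    """Weighted soft voting over the calibrated source-specific models."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x)[0], dim=1) for m in models])
    return (weights.view(-1, 1, 1) * probs).sum(dim=0).argmax(dim=1)
```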
2. A method for emotion transition recognition using cross-modal feature fusion and global perception.
Lilin JIE ; Yangmeng ZOU ; Zhengxiu LI ; Baoliang LYU ; Weilong ZHENG ; Ming LI
Journal of Biomedical Engineering 2025;42(5):977-986
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). First, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants and annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture combining convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, whereas the Transformer leverages its strong global perception capability to model long-range temporal dependencies, enabling accurate recognition of dynamic emotion transitions. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.
Humans; Emotions/physiology*; Electroencephalography; Neural Networks, Computer; Eye Movements; Perception
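To make the architecture described above concrete, the following minimal PyTorch sketch pairs a cross-modal attention fusion step (EEG features attending to eye-movement features) with a parallel CNN + Transformer regressor for continuous valence and arousal. All module names and dimensions are assumptions, and the paper's DCCA component is omitted here; `nn.MultiheadAttention` stands in for the fusion step purely for illustration.

```python
# Hypothetical sketch of a parallel CNN + Transformer backbone with cross-modal
# attention fusion (CFGPN-style); dimensions and layer choices are assumptions.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """EEG features attend to eye-movement features; the attended result is concatenated back."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, eeg, eye):                       # both: (batch, time, d_model)
        fused, _ = self.attn(query=eeg, key=eye, value=eye)
        return torch.cat([eeg, fused], dim=-1)         # (batch, time, 2 * d_model)


class CNNTransformerRegressor(nn.Module):
    """Parallel branches: a 1-D CNN for local temporal patterns and a Transformer encoder
    for long-range dependencies; outputs continuous valence and arousal."""
    def __init__(self, d_in=128, d_model=128):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(d_in, d_model, kernel_size=5, padding=2), nn.ReLU(),
                                 nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=d_in, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model + d_in, 2)       # valence, arousal

    def forward(self, x):                              # x: (batch, time, d_in)
        local = self.cnn(x.transpose(1, 2)).mean(dim=2)         # (batch, d_model)
        global_ = self.transformer(x).mean(dim=1)               # (batch, d_in)
        return self.head(torch.cat([local, global_], dim=-1))   # (batch, 2)


# Usage sketch: fuse per-window EEG and eye-movement embeddings, then regress
# valence/arousal per window and train with mean square error against continuous labels.
fusion, model = CrossModalFusion(), CNNTransformerRegressor(d_in=128)
eeg, eye = torch.randn(8, 30, 64), torch.randn(8, 30, 64)
pred = model(fusion(eeg, eye))                          # (8, 2)
loss = nn.functional.mse_loss(pred, torch.randn(8, 2))
```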
3. Development of DSA Information Management and Image Analysis System Based on Java Web.
Chao SUN ; Qingmin MENG ; Li WANG ; Jijin YAO ; Baoliang ZONG ; Yongxin GUO ; Qing JIAO
Chinese Journal of Medical Instrumentation 2019;43(5):348-351
OBJECTIVE:
To establish a digital subtraction angiography (DSA) information management and image analysis system that enables scientific management of DSA image information and efficient processing of image data.
METHODS:
Based on Java Web technology in a Windows 7 environment, a dynamic Browser/Server (B/S) system was constructed with JSP and Servlet. Eclipse was used as the development tool and MySQL as the database platform, with Tomcat serving as the application server. Matlab code was embedded to analyze the DSA images.
RESULTS:
The system consists of five modules: image information management, image processing, image analysis, advanced retrieval, and clinical data management. It supports storing, deleting, saving, and analyzing DSA images together with basic patient information.
CONCLUSIONS:
The system's main interface is user-friendly and easy to operate. The system will be helpful for clinical, teaching, and scientific research work related to DSA.
Angiography, Digital Subtraction/statistics & numerical data; Databases, Factual; Humans; Indonesia; Information Management; Internet; Software; User-Computer Interface
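The system above is implemented on a Java Web stack (JSP/Servlet on Tomcat with MySQL, plus embedded Matlab for image analysis). Since the code sketches in this listing use Python, the following is only an analogous, hypothetical illustration of the same Browser/Server storage-and-analysis workflow; the Flask/SQLite stack, endpoint paths, and the placeholder `analyze_dsa` routine are assumptions and not taken from the paper.

```python
# Analogous Browser/Server sketch (hypothetical): the actual system uses JSP/Servlet,
# Tomcat, and MySQL with embedded Matlab; here Flask + SQLite + a placeholder analysis
# routine stand in to illustrate the store/analyze workflow only.
import sqlite3
from pathlib import Path
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "dsa.db"
UPLOADS = Path("uploads")
UPLOADS.mkdir(exist_ok=True)


def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS images (
                           id INTEGER PRIMARY KEY,
                           patient_name TEXT,
                           file_path TEXT)""")


def analyze_dsa(path):
    # Placeholder for the image-analysis step; the paper embeds Matlab code here.
    return {"file": str(path), "status": "analysis not implemented in this sketch"}


@app.route("/images", methods=["POST"])
def upload_image():
    """Store a DSA image and its basic patient information (image information management)."""
    image = request.files["image"]
    dest = UPLOADS / image.filename
    image.save(dest)
    with sqlite3.connect(DB) as con:
        con.execute("INSERT INTO images (patient_name, file_path) VALUES (?, ?)",
                    (request.form.get("patient_name", ""), str(dest)))
    return jsonify({"stored": image.filename})


@app.route("/images/<int:image_id>/analysis", methods=["GET"])
def run_analysis(image_id):
    """Fetch the stored image path and run the (placeholder) analysis step."""
    with sqlite3.connect(DB) as con:
        row = con.execute("SELECT file_path FROM images WHERE id = ?", (image_id,)).fetchone()
    if row is None:
        return jsonify({"error": "image not found"}), 404
    return jsonify(analyze_dsa(row[0]))


if __name__ == "__main__":
    init_db()
    app.run()
```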
4. Expression of GABAR1 and NMDAR2B in aged rat cerebral temporal lobes after isoflurane inhalation
Gaoya CAO ; Bei WU ; Zhen XING ; Baoliang JIAO ; Fulong LI ; Jinliang TENG ; Xinsheng WANG
The Journal of Clinical Anesthesiology 2017;33(5):483-487
Objective To investigate the effects of different concentrations and inhalation durations of isoflurane on cognitive performance and on the expression of GABAR1 and NMDAR2B in the cerebral temporal lobes of aged SD rats. Methods Aged male SD rats (9 months) were randomly divided into a control group (n=10) and a test group (n=80). The control group received air at room temperature. The test group was divided into four subgroups according to isoflurane concentration and inhalation duration: group S1 (1.5%, 2 h), group S2 (2.5%, 2 h), group S3 (1.5%, 4 h), and group S4 (2.5%, 4 h). Each group was further divided equally into two, and the Morris water maze test was performed on day 1 and on day 7 after isoflurane inhalation. The right temporal lobe was then harvested, and the mRNA transcription and protein expression of GABAR1 and NMDAR2B were detected by RT-PCR and immunofluorescence. Results One day after isoflurane inhalation, with increasing isoflurane concentration and inhalation duration, the spatial memory ability of every test subgroup decreased progressively; compared with the control group, the mRNA transcription and protein expression of GABAR1 increased while those of NMDAR2B decreased (P<0.01). Seven days after isoflurane inhalation, the spatial memory ability of group S4 remained decreased, with increased mRNA transcription and protein expression of GABAR1 and decreased mRNA transcription and protein expression of NMDAR2B compared with the control group and the other test subgroups (P<0.01); there was no significant difference between the control group and groups S1, S2, and S3. Conclusion Continuous inhalation of isoflurane markedly affects spatial memory ability, and the impairment caused by high-concentration, long-duration inhalation persists for an extended period. These effects are related to the mRNA transcription and protein expression of GABAR1 and NMDAR2B in the cerebral temporal lobe.
