1.Development of an abdominal acupoint localization system based on AI deep learning.
Mo ZHANG ; Yuming LI ; Zongming SHI
Chinese Acupuncture & Moxibustion 2025;45(3):391-396
This study aims to develop an abdominal acupoint localization system based on computer vision and convolutional neural networks (CNNs). To address the challenge of abdominal acupoint localization, a multi-task CNN architecture was constructed and trained to locate Shenque (CV8) and the body boundaries. Based on the identified Shenque (CV8), the system further derives the locations of four acupoints: Shangwan (CV13), Qugu (CV2), and bilateral Daheng (SP15). An affine transformation matrix is applied to accurately map image coordinates to an acupoint template space, achieving precise localization of abdominal acupoints. Testing has verified that the system can accurately identify and locate abdominal acupoints in images. This localization system provides technical support for TCM remote education, diagnostic assistance, and advanced TCM equipment such as intelligent acupuncture robots, facilitating the standardization and intelligent advancement of acupuncture.
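A minimal sketch of the affine-mapping step described above, in Python with NumPy: the landmark names, coordinate values, and the least-squares fitting routine are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.
    src, dst: (N, 2) arrays of corresponding landmarks, N >= 3."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous source coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine matrix
    return M

def apply_affine(M, pts):
    """Map (N, 2) points through the fitted affine matrix."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical landmarks detected by the CNN (pixel coordinates) and their
# counterparts in a normalized acupoint template space.
image_landmarks    = np.array([[312.0, 240.0], [310.0, 118.0], [315.0, 402.0], [178.0, 242.0]])
template_landmarks = np.array([[0.5, 0.5], [0.5, 0.2], [0.5, 0.9], [0.2, 0.5]])

M = fit_affine(template_landmarks, image_landmarks)
# Map a template-space acupoint back into image coordinates.
print(apply_affine(M, [[0.65, 0.5]]))
```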
Acupuncture Points
;
Humans
;
Deep Learning
;
Abdomen/diagnostic imaging*
;
Neural Networks, Computer
;
Acupuncture Therapy
;
Image Processing, Computer-Assisted
2.Large models in medical imaging: Advances and prospects.
Mengjie FANG ; Zipei WANG ; Sitian PAN ; Xin FENG ; Yunpeng ZHAO ; Dongzhi HOU ; Ling WU ; Xuebin XIE ; Xu-Yao ZHANG ; Jie TIAN ; Di DONG
Chinese Medical Journal 2025;138(14):1647-1664
Recent advances in large models demonstrate significant prospects for transforming the field of medical imaging. These models, including large language models, large visual models, and multimodal large models, offer unprecedented capabilities in processing and interpreting complex medical data across various imaging modalities. By leveraging self-supervised pretraining on vast unlabeled datasets, cross-modal representation learning, and domain-specific medical knowledge adaptation through fine-tuning, large models can achieve higher diagnostic accuracy and more efficient workflows for key clinical tasks. This review summarizes the concepts, methods, and progress of large models in medical imaging, highlighting their potential in precision medicine. The article first outlines the integration of multimodal data under large model technologies, approaches for training large models with medical datasets, and the need for robust evaluation metrics. It then explores how large models can revolutionize applications in critical tasks such as image segmentation, disease diagnosis, personalized treatment strategies, and real-time interactive systems, thus pushing the boundaries of traditional imaging analysis. Despite their potential, the practical implementation of large models in medical imaging faces notable challenges, including the scarcity of high-quality medical data, the need for optimized perception of imaging phenotypes, safety considerations, and seamless integration with existing clinical workflows and equipment. As research progresses, the development of more efficient, interpretable, and generalizable models will be critical to ensuring their reliable deployment across diverse clinical environments. This review aims to provide insights into the current state of the field and to offer directions for future research, facilitating the broader adoption of large models in clinical practice.
Humans
;
Diagnostic Imaging/methods*
;
Precision Medicine/methods*
;
Image Processing, Computer-Assisted/methods*
3.Role of artificial intelligence in medical image analysis.
Lu WANG ; Shimin ZHANG ; Nan XU ; Qianqian HE ; Yuming ZHU ; Zhihui CHANG ; Yanan WU ; Huihan WANG ; Shouliang QI ; Lina ZHANG ; Yu SHI ; Xiujuan QU ; Xin ZHOU ; Jiangdian SONG
Chinese Medical Journal 2025;138(22):2879-2894
With the emergence of deep learning techniques based on convolutional neural networks, artificial intelligence (AI) has driven transformative developments in the field of medical image analysis. Recently, large language models (LLMs) such as ChatGPT have also begun to gain prominence in this domain. A growing body of research demonstrates the undeniable role of AI in reshaping various aspects of medical image analysis, including preprocessing steps such as image enhancement, segmentation, and detection, as well as postprocessing related to medical diagnosis and prognosis in clinical settings. However, despite significant progress in AI research, studies that examine recent advances in AI technology in these aspects, trace the shifts in research hotspots, and assess how well existing work addresses key clinical challenges in this field remain limited. This article provides an overview of recent advances in AI for medical image analysis and discusses the methodological profiles, advantages, disadvantages, and future trends of AI technologies.
Artificial Intelligence
;
Humans
;
Image Processing, Computer-Assisted/methods*
;
Neural Networks, Computer
;
Deep Learning
;
Diagnostic Imaging/methods*
4.Three-dimensional human-robot mechanics modeling for dual-arm nursing-care robot transfer based on individualized musculoskeletal multibody dynamics.
Zhiqiang YANG ; Funing HOU ; Qiang LIN ; Jiexin XIE ; Hao LU ; Shijie GUO
Journal of Biomedical Engineering 2025;42(1):96-104
During transfer tasks, a dual-arm nursing-care robot requires a human-robot mechanics model to determine the balance region needed to support the patient safely and stably. Previous studies used two-dimensional static human-robot equilibrium models that ignored body volume and muscle torques, which reduced model accuracy and limited the robot's ability to adjust the patient's posture in three-dimensional space. Therefore, this study proposes a three-dimensional spatial mechanics modeling method based on individualized human musculoskeletal multibody dynamics. First, based on the mechanical features of dual-arm support, a foundational three-dimensional human-robot mechanics model was constructed, incorporating body posture, contact positions, and body forces. Using computed tomography data from subjects, a three-dimensional femur-pelvis-sacrum model was reconstructed, and individualized musculoskeletal dynamics were analyzed in ergonomics software to derive human joint forces and complete the mechanics model. A dual-arm robot transfer platform was then built to conduct subject transfer experiments, which showed that the constructed mechanics model is more accurate than previous methods. In summary, this study provides a three-dimensional human-robot mechanics model that adapts to individual transfers and has potential applications in scenarios such as nursing-care and rehabilitation robots.
Humans
;
Robotics
;
Biomechanical Phenomena
;
Posture
;
Imaging, Three-Dimensional
;
Nursing Care
5.Methods for enhancing image quality of soft tissue regions in synthetic CT based on cone-beam CT.
Ziwei FU ; Yechen ZHU ; Zijian ZHANG ; Xin GAO
Journal of Biomedical Engineering 2025;42(1):113-122
Synthetic CT (sCT) generated from cone-beam CT (CBCT) has proven effective for artifact reduction and CT number correction, facilitating precise radiation dose calculation. However, image quality across different regions of sCT images is severely imbalanced, with the soft tissue region exhibiting notably inferior quality compared to other regions. To address this imbalance, we proposed a Multi-Task Attention Network (MuTA-Net) based on VGG-16, specifically focused on enhancing image quality in the soft tissue region of sCT. First, we introduced a multi-task learning strategy that divides sCT generation into three sub-tasks: global image generation, soft tissue region generation, and bone region segmentation. This approach ensures the quality of the overall sCT image while strengthening the network's feature extraction and generation for the soft tissue region; the result of the bone region segmentation task guides the fusion of the sub-task results. Then, we designed an attention module to further improve the feature extraction capability of the network. Finally, a result fusion module integrates the outputs of the three sub-tasks to generate a high-quality sCT image. Experimental results on head and neck CBCT demonstrated that sCT images generated by the proposed MuTA-Net achieved a 12.52% reduction in mean absolute error in the soft tissue region compared to the best of three comparative methods (ResNet, U-Net, and U-Net++). These results indicate that MuTA-Net is suitable for high-quality sCT image generation and has potential application value in CBCT-guided adaptive radiation therapy.
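A minimal sketch of the soft-tissue-restricted mean absolute error evaluation reported above, in NumPy; the HU window used to define the soft-tissue mask and the simulated volumes are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def region_mae(sct, ct, mask):
    """Mean absolute error (in HU) between synthetic CT and reference CT,
    restricted to a binary region mask such as soft tissue."""
    diff = np.abs(sct.astype(float) - ct.astype(float))
    return diff[mask].mean()

# Hypothetical reference CT, synthetic CT, and a crude soft-tissue HU window.
rng = np.random.default_rng(0)
ct = rng.integers(-1000, 1500, size=(64, 64, 64)).astype(float)
sct = ct + rng.normal(0.0, 30.0, size=ct.shape)   # simulated sCT error
soft_tissue = (ct > -200) & (ct < 300)            # assumed soft-tissue window
print(f"soft-tissue MAE: {region_mae(sct, ct, soft_tissue):.1f} HU")
```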
Cone-Beam Computed Tomography/methods*
;
Humans
;
Image Processing, Computer-Assisted/methods*
;
Artifacts
;
Algorithms
;
Bone and Bones/diagnostic imaging*
;
Neural Networks, Computer
6.Study on lightweight plasma recognition algorithm based on depth image perception.
Hanwen ZHANG ; Yu SUN ; Hao JIANG ; Jintian HU ; Gangyin LUO ; Dong LI ; Weijuan CAO ; Xiang QIU
Journal of Biomedical Engineering 2025;42(1):123-131
In clinical settings, suspected hemolytic plasma may cause hemolytic illness, manifesting as symptoms such as heart failure and severe anemia. Applying deep learning to plasma images significantly improves recognition accuracy, so this paper proposes a plasma quality detection model based on an improved "You Only Look Once" version 5 (YOLOv5). The proposed model and its evaluation system were applied to the plasma datasets, and the final average classification accuracy reached 98.7%. These experimental results were obtained by combining several key algorithm modules, including omni-dimensional dynamic convolution, pooling with separable kernel attention, a residual bi-fusion feature pyramid network, and re-parameterization convolution. The proposed method efficiently captures spatially mapped feature information and improves the average recognition accuracy of plasma quality detection. This paper presents a high-efficiency detection method for plasma images, aiming to provide a practical approach for preventing hemolytic illness caused by external factors.
Algorithms
;
Humans
;
Hemolysis
;
Plasma
;
Deep Learning
;
Image Processing, Computer-Assisted/methods*
7.Pancreas segmentation with multi-channel convolution and combined deep supervision.
Yue YANG ; Yongxiong WANG ; Chendong QIN
Journal of Biomedical Engineering 2025;42(1):140-147
Due to its irregular shape and varying contour, pancreas segmentation is a recognized challenge in medical image segmentation. Convolutional neural networks (CNNs) and Transformer-based networks perform well but have limitations: CNNs have constrained receptive fields, and Transformers underutilize image features. This work proposes an improved pancreas segmentation method that combines CNNs and Transformers. Point-wise separable convolution was introduced in a stage-wise encoder to extract more features with fewer parameters. A densely connected ensemble decoder enabled multi-scale feature fusion, addressing the structural constraints of skip connections. Consistency terms and a contrastive loss were integrated into deep supervision to ensure model accuracy. Extensive experiments on the Changhai and National Institutes of Health (NIH) pancreas datasets achieved the highest Dice similarity coefficient (DSC) values of 76.32% and 86.78%, respectively, with superiority in other metrics as well. Ablation studies validated each component's contribution to performance and parameter reduction. The results demonstrate that the proposed loss function smooths training and improves performance. Overall, the method outperforms other advanced methods, enhances pancreas segmentation performance, supports physician diagnosis, and provides a reliable reference for future research.
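For reference, a minimal sketch of the Dice similarity coefficient (DSC) used as the headline metric above, computed over binary masks in NumPy; the example masks are illustrative.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square masks.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
print(f"DSC = {dice(a, b):.4f}")
```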
Humans
;
Neural Networks, Computer
;
Pancreas/diagnostic imaging*
;
Image Processing, Computer-Assisted/methods*
;
Algorithms
;
Deep Learning
8.A joint distillation model for the tumor segmentation using breast ultrasound images.
Hongjiang GUO ; Youyou DING ; Hao DANG ; Tongtong LIU ; Xuekun SONG ; Ge ZHANG ; Shuo YAO ; Daisen HOU ; Zongwang LYU
Journal of Biomedical Engineering 2025;42(1):148-155
The accurate segmentation of breast ultrasound images is an important precondition for lesion determination. Existing segmentation approaches involve massive numbers of parameters, sluggish inference speed, and large memory consumption. To tackle this problem, we propose T²KD Attention U-Net (dual-Teacher Knowledge Distillation Attention U-Net), a lightweight semantic segmentation method combining dual-path joint distillation for breast ultrasound images. First, we designed two teacher models to learn fine-grained features from each class of images, according to the different feature representations and semantic information of benign and malignant breast lesions. We then leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight balance loss that focuses on the semantic features of small objects, addressing the imbalance between tumor and background. Extensive experiments conducted on Dataset BUSI and Dataset B demonstrated that T²KD Attention U-Net outperformed various knowledge distillation counterparts. Specifically, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59%, and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40%, and 82.42% on Dataset B, respectively. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, and 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). The proposed model thus maintains performance while greatly decreasing the amount of computation, providing a new method for deployment in clinical scenarios.
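A minimal sketch of a temperature-scaled knowledge distillation loss of the kind used to train a lightweight student from teacher outputs, in PyTorch; this is a generic single-teacher formulation under assumed shapes and weights, not the paper's exact dual-teacher T²KD loss or its weight balance term.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=4.0, alpha=0.5):
    """Temperature-scaled soft-label KL term plus hard-label cross-entropy.
    student_logits, teacher_logits: (B, C, H, W); target: (B, H, W) class indices."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, target)
    return alpha * soft + (1.0 - alpha) * hard

# Toy shapes: batch of 2, two classes (background / tumor), 64x64 masks.
s = torch.randn(2, 2, 64, 64, requires_grad=True)
t = torch.randn(2, 2, 64, 64)
y = torch.randint(0, 2, (2, 64, 64))
print(distillation_loss(s, t, y))
```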
Humans
;
Breast Neoplasms/diagnostic imaging*
;
Female
;
Ultrasonography, Mammary/methods*
;
Image Processing, Computer-Assisted/methods*
;
Algorithms
;
Neural Networks, Computer
;
Breast/diagnostic imaging*
9.Cross-modal hash retrieval of medical images based on Transformer semantic alignment.
Qianlin WU ; Lun TANG ; Qinghai LIU ; Liming XU ; Qianbin CHEN
Journal of Biomedical Engineering 2025;42(1):156-163
Medical cross-modal retrieval aims to achieve semantic similarity search across different modalities of medical cases, for example quickly locating relevant ultrasound images from ultrasound reports, or using ultrasound images to retrieve matching reports. However, existing medical cross-modal hash retrieval methods face significant challenges, including semantic and visual differences between modalities and the scalability limits of hash algorithms on large-scale data. To address these challenges, this paper proposes Transformer-based Medical image Semantic Alignment Cross-modal Hashing (MSACH). The algorithm employed a segmented training strategy combining modality feature extraction and hash function learning, effectively extracting low-dimensional features that contain important semantic information. A Transformer encoder was used for cross-modal semantic learning. By introducing manifold similarity constraints, balance constraints, and a linear classification network constraint, the algorithm enhanced the discriminability of the hash codes. Experimental results demonstrated that the MSACH algorithm improved the mean average precision (MAP) by 11.8% and 12.8% on two datasets compared with traditional methods. The algorithm exhibits strong performance in improving retrieval accuracy and handling large-scale medical data, showing promising potential for practical applications.
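A minimal sketch of how mean average precision (MAP) is typically computed for hash-based retrieval, ranking database items by Hamming distance to a query code; the binary codes and relevance labels here are illustrative and this is the evaluation metric, not the MSACH algorithm itself.

```python
import numpy as np

def average_precision(query_code, db_codes, relevant):
    """AP for one query: rank database items by Hamming distance to the
    query hash code, then average precision over the relevant hits."""
    dist = np.count_nonzero(db_codes != query_code, axis=1)   # Hamming distances
    order = np.argsort(dist, kind="stable")
    rel = relevant[order].astype(bool)
    if not rel.any():
        return 0.0
    hits = np.cumsum(rel)                        # cumulative relevant count
    ranks = np.flatnonzero(rel) + 1              # 1-based ranks of relevant hits
    return float(np.mean(hits[rel] / ranks))

# Toy 8-bit codes for 6 database items and one query, with relevance labels.
rng = np.random.default_rng(1)
db = rng.integers(0, 2, size=(6, 8))
query = rng.integers(0, 2, size=8)
labels = np.array([1, 0, 1, 0, 0, 1])
print(f"AP = {average_precision(query, db, labels):.3f}")
# MAP is the mean of AP over all queries.
```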
Algorithms
;
Semantics
;
Humans
;
Ultrasonography
;
Information Storage and Retrieval/methods*
;
Image Processing, Computer-Assisted/methods*
10.Spherical measurement-based analysis of gradient nonlinearity in magnetic resonance imaging.
Xiaoli YANG ; Zhaolian WANG ; Qian WANG ; Yiting ZHANG ; Zixuan SONG ; Yuchang ZHANG ; Yafei QI ; Xiaopeng MA
Journal of Biomedical Engineering 2025;42(1):174-180
The gradient field, one of the core magnetic fields in magnetic resonance imaging (MRI) systems, is generated by the gradient coils and plays a critical role in spatial encoding and the generation of echo signals. The uniformity, or linearity, of the gradient field directly affects the quality and distortion level of MRI images. However, traditional point measurement methods lack the accuracy needed to assess gradient field linearity, making it difficult to provide effective parameters for image distortion correction. This paper introduces a spherical measurement-based method that measures the magnetic field distribution on a sphere and then performs detailed magnetic field calculation and linearity analysis. Applied to assess the nonlinearity of asymmetric head gradient coils, the method produced more comprehensive and precise results than point measurement methods. This advancement not only strengthens the scientific basis for gradient coil design but also provides more reliable parameters and methods for the accurate correction of MRI image distortions.
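A minimal sketch of one way to quantify gradient nonlinearity from field samples on a sphere: the relative deviation of the measured field from the ideal linear field G·x. The sampling pattern, nominal gradient strength, and simulated distortion are illustrative assumptions, not the paper's spherical measurement and analysis procedure.

```python
import numpy as np

def relative_nonlinearity(points, measured_bz, g_nominal, axis=0):
    """Deviation of measured B_z from the ideal linear gradient field
    B_z = G * r_axis, normalized by the ideal field at the sphere radius.
    points: (N, 3) positions in m; measured_bz: (N,) field values in T."""
    ideal = g_nominal * points[:, axis]
    radius = np.max(np.linalg.norm(points, axis=1))
    return (measured_bz - ideal) / (g_nominal * radius)

# Hypothetical sample points on a 0.2 m sphere, with a 2% cubic distortion
# added to an ideal 10 mT/m x-gradient.
rng = np.random.default_rng(2)
theta = np.arccos(rng.uniform(-1, 1, 300))
phi = rng.uniform(0, 2 * np.pi, 300)
r = 0.2
pts = r * np.stack([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], axis=1)
g = 0.010                                        # nominal gradient, T/m
bz = g * pts[:, 0] * (1 + 0.02 * (pts[:, 0] / r) ** 2)
print(f"max |nonlinearity|: {np.abs(relative_nonlinearity(pts, bz, g)).max():.2%}")
```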
Magnetic Resonance Imaging/instrumentation*
;
Humans
;
Image Processing, Computer-Assisted/methods*
;
Nonlinear Dynamics
;
Magnetic Fields
;
Algorithms
;
Phantoms, Imaging
