1. Chest computed tomography-based artificial intelligence-aided latent class analysis for diagnosis of severe pneumonia.
Caiting CHU ; Yiran GUO ; Zhenghai LU ; Ting GUI ; Shuhui ZHAO ; Xuee CUI ; Siwei LU ; Meijiao JIANG ; Wenhua LI ; Chengjin GAO
Chinese Medical Journal 2025;138(18):2316-2323
BACKGROUND:
Little literature describes artificial intelligence (AI)-aided diagnosis of severe pneumonia (SP) subphenotypes or the association of these subphenotypes with the efficacy of ventilatory treatment. The aim of our study was to determine whether clinical and biological heterogeneity, such as in ventilation and gas exchange, exists among patients with SP, using chest computed tomography (CT)-based, AI-aided latent class analysis (LCA).
METHODS:
This retrospective study included 413 patients diagnosed with SP and hospitalized at Xinhua Hospital from June 1, 2015 to May 30, 2020. AI quantification results of chest CT, alone and in combination with additional clinical variables, were used to develop LCA models in the SP population. The optimal subphenotypes were determined by evaluating the statistical indicators of all the LCA models, and their clinical implications, such as guiding ventilation strategies, were further explored with statistical methods.
RESULTS:
The two-class LCA model based on AI quantification results of chest CT described the biological characteristics of the SP population well and hence yielded two clinical subphenotypes. Patients with subphenotype-1 had milder infections (P <0.001) than patients with subphenotype-2 and had lower 30-day (P <0.001), 90-day (P <0.001), in-hospital (P = 0.001), and 2-year (P <0.001) mortality. Patients with subphenotype-1 showed a better match between the percentage of non-infected lung volume (used to quantify ventilation) and oxygen saturation (used to reflect gas exchange) than patients with subphenotype-2; the difference in the degree of matching between lung ventilation and gas exchange between the two subphenotypes was significant (P <0.001). Compared with patients with subphenotype-2, those with subphenotype-1 showed a relatively better match between CT-based AI metrics of the non-infected region and oxygenation, and their clinical outcomes were effectively improved after receiving invasive ventilation.
CONCLUSIONS:
A two-class LCA model based on AI quantification results of chest CT in the SP population particularly revealed clinical heterogeneity of lung function. Identifying the degree of match between ventilation and gas exchange may help guide decisions about assisted ventilation.
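For readers unfamiliar with the class-selection step, LCA over continuous CT-derived indicators is often approximated with a finite Gaussian mixture (latent profile analysis), with the number of classes chosen by an information criterion such as the BIC. The Python sketch below illustrates that generic workflow only; it is not the authors' model, and the feature matrix, candidate class counts, and selection criterion are illustrative assumptions.

    # Sketch: selecting the number of latent classes for continuous indicators
    # with a Gaussian mixture (latent profile analysis) and BIC.
    # Illustrative only -- not the LCA specification used in the paper.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_latent_classes(features: np.ndarray, max_classes: int = 5):
        """features: (n_patients, n_ct_metrics) matrix of AI-derived CT metrics."""
        models = {
            k: GaussianMixture(n_components=k, covariance_type="full",
                               n_init=5, random_state=0).fit(features)
            for k in range(1, max_classes + 1)
        }
        bics = {k: m.bic(features) for k, m in models.items()}
        best_k = min(bics, key=bics.get)            # lowest BIC wins
        return best_k, models[best_k].predict(features), bics

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Two synthetic subgroups of hypothetical CT metrics.
        x = np.vstack([rng.normal(0.2, 0.05, (200, 3)),
                       rng.normal(0.6, 0.08, (200, 3))])
        k, labels, bics = fit_latent_classes(x)
        print("selected number of classes:", k)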
Humans; Tomography, X-Ray Computed/methods*; Male; Female; Retrospective Studies; Middle Aged; Artificial Intelligence; Aged; Pneumonia/diagnosis*; Latent Class Analysis; Adult
2. Post-resuscitation care of patients with return of spontaneous circulation after out-of-hospital cardiac arrest at the emergency department.
Jing Kai Jackie LAM ; Jen Heng PEK
Singapore medical journal 2025;66(2):66-72
INTRODUCTION:
Out-of-hospital cardiac arrest (OHCA) is a major public health challenge, and goals after return of spontaneous circulation (ROSC) have shifted from survival alone to survival with intact neurology. Although post-ROSC care is crucial for neurologically intact survival, well-established protocols for post-resuscitation care are lacking. We aimed to evaluate post-resuscitation care in the emergency department (ED) for adult (aged ≥16 years) OHCA patients with sustained ROSC and its association with neurologically intact survival.
METHODS:
A retrospective review of electronic medical records was conducted for OHCA patients with sustained ROSC at the ED. Data including demographics, pre-hospital resuscitation, ED resuscitation, post-resuscitation care and eventual outcomes were analysed.
RESULTS:
Among 921 OHCA patients, 85 (9.2%) had sustained ROSC at the ED. Nineteen patients (19/85, 22.4%) survived, with 13 (13/85, 15.3%) having intact neurology at discharge. Electrocardiography and chest X-ray were performed in all OHCA patients, whereas computed tomography (CT) was performed inconsistently: CT brain was most common (74/85, 87.1%), while CT pulmonary angiogram (6/85, 7.1%), CT abdomen and pelvis (4/85, 4.7%) and CT aortogram (2/85, 2.4%) were done infrequently. Only four patients (4.7%) had all five neuroprotective goals of normoxia, normocarbia, normotension, normothermia and normoglycaemia achieved in the ED. The proportion with all five neuroprotective goals met was significantly higher (P = 0.01) among those with neurologically intact survival (3/13, 23.1%) than among those without (1/72, 1.4%).
CONCLUSION:
Post-resuscitation care at the ED showed great variability, indicating gaps between recommended guidelines and clinical practice. Good quality post-resuscitation care, centred around neuroprotection goals, must be initiated promptly to achieve meaningful survival with intact neurology.
Humans; Out-of-Hospital Cardiac Arrest/mortality*; Retrospective Studies; Male; Female; Middle Aged; Emergency Service, Hospital; Cardiopulmonary Resuscitation/methods*; Return of Spontaneous Circulation; Aged; Adult; Treatment Outcome; Electrocardiography; Tomography, X-Ray Computed; Aged, 80 and over
3. Artificial intelligence in medical imaging: From task-specific models to large-scale foundation models.
Yueyan BIAN ; Jin LI ; Chuyang YE ; Xiuqin JIA ; Qi YANG
Chinese Medical Journal 2025;138(6):651-663
Artificial intelligence (AI), particularly deep learning, has demonstrated remarkable performance in medical imaging across a variety of modalities, including X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and pathological imaging. However, most existing state-of-the-art AI techniques are task-specific and focus on a limited range of imaging modalities. Compared to these task-specific models, emerging foundation models represent a significant milestone in AI development. These models can learn generalized representations of medical images and apply them to downstream tasks through zero-shot or few-shot fine-tuning. Foundation models have the potential to address the comprehensive and multifactorial challenges encountered in clinical practice. This article reviews the clinical applications of both task-specific and foundation models, highlighting their differences, complementarities, and clinical relevance. We also examine their future research directions and potential challenges. Unlike the replacement relationship seen between deep learning and traditional machine learning, task-specific and foundation models are complementary, despite inherent differences. While foundation models primarily focus on segmentation and classification, task-specific models are integrated into nearly all medical image analyses. However, with further advancements, foundation models could be applied to other clinical scenarios. In conclusion, all indications suggest that task-specific and foundation models, especially the latter, have the potential to drive breakthroughs in medical imaging, from image processing to clinical workflows.
Humans; Artificial Intelligence; Deep Learning; Diagnostic Imaging/methods*; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Positron-Emission Tomography
4. Stroke-p2pHD: Cross-modality generation model of cerebral infarction from CT to DWI images.
Qing WANG ; Xinyao ZHAO ; Xinyue LIU ; Zhimeng ZOU ; Haiwang NAN ; Qiang ZHENG
Journal of Biomedical Engineering 2025;42(2):255-262
Among numerous medical imaging modalities, diffusion weighted imaging (DWI) is extremely sensitive to acute ischemic stroke lesions, especially small infarcts. However, magnetic resonance imaging is time-consuming and expensive, and it is also prone to interference from metal implants. Therefore, the aim of this study was to design a medical image synthesis method based on a generative adversarial network, Stroke-p2pHD, for synthesizing DWI images from computed tomography (CT). Stroke-p2pHD consisted of a generator that effectively fused local image features and global context information (Global_to_Local) and a multi-scale discriminator (M2Dis). Specifically, in the Global_to_Local generator, a fully convolutional Transformer (FCT) and a local attention module (LAM) were integrated to achieve the synthesis of detailed information such as textures and lesions in DWI images. In the M2Dis discriminator, a multi-scale convolutional network was adopted to discriminate the input images, while maintaining an optimization balance with the Global_to_Local generator and constraining the feature consistency of each layer of the M2Dis discriminator. The public Acute Ischemic Stroke Dataset (AISD) and the acute cerebral infarction dataset from Yantaishan Hospital were used to verify the performance of the Stroke-p2pHD model in synthesizing DWI from CT. Compared with other methods, the Stroke-p2pHD model showed excellent quantitative results (mean squared error = 0.008, peak signal-to-noise ratio = 23.766, structural similarity = 0.743). In addition, related experimental analyses, such as of computational efficiency, indicate that the Stroke-p2pHD model has great potential for clinical application.
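The quantitative results above (mean squared error, PSNR, structural similarity) are standard full-reference image-similarity metrics. As a hedged illustration only (not the authors' evaluation code), the sketch below shows how such metrics are commonly computed for a pair of co-registered images scaled to [0, 1] with NumPy and scikit-image; the array names and normalization range are assumptions.

    # Sketch: common image-similarity metrics for evaluating synthesized images.
    # Assumes two co-registered 2-D images already scaled to [0, 1];
    # illustrative only, not the evaluation code from the paper.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def similarity_metrics(reference: np.ndarray, synthesized: np.ndarray) -> dict:
        """Return MSE, PSNR (dB), and SSIM for two images in [0, 1]."""
        mse = float(np.mean((reference - synthesized) ** 2))
        psnr = peak_signal_noise_ratio(reference, synthesized, data_range=1.0)
        ssim = structural_similarity(reference, synthesized, data_range=1.0)
        return {"mse": mse, "psnr_db": float(psnr), "ssim": float(ssim)}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.random((128, 128))
        synthesized = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0.0, 1.0)
        print(similarity_metrics(reference, synthesized))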
Humans; Tomography, X-Ray Computed/methods*; Diffusion Magnetic Resonance Imaging/methods*; Cerebral Infarction/diagnostic imaging*; Stroke/diagnostic imaging*; Neural Networks, Computer; Image Processing, Computer-Assisted/methods*; Algorithms
5. A method for determining spatial resolution of phantom based on automatic contour delineation.
Ying LIU ; Minghao SUN ; Haowei ZHANG ; Haikuan LIU
Journal of Biomedical Engineering 2025;42(2):263-271
In this study, we propose an automatic contour delineation method for measuring the spatial resolution of a homemade automatic tube current modulation (ATCM) phantom by delineating the edge contour of the phantom image, selecting a region of interest (ROI), and measuring the spatial resolution characteristics of the computed tomography (CT) phantom image. Specifically, the method obtains a binarized image of the phantom delineated by an automated fast region convolutional neural network (AFRCNN) model, measures the edge spread function (ESF) of the CT phantom at different tube currents and slice thicknesses, and differentiates the ESF to obtain the line spread function (LSF). Finally, the LSF is Fourier transformed and normalized by its zero-frequency value to obtain the modulation transfer function (MTF) and the CT spatial resolution index (RI) automatically. The algorithm was compared with an algorithm that uses polymethyl methacrylate (PMMA) to measure the MTF at the phantom edges in order to verify its feasibility, and the results show that the AFRCNN model not only improves the efficiency and accuracy of phantom contour delineation but also obtains a more accurate spatial resolution value through automated segmentation. In summary, the algorithm proposed in this study measures the spatial resolution of phantom images accurately and has the potential to be widely used on real clinical CT images.
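The ESF-to-MTF chain described above (differentiate the ESF to obtain the LSF, Fourier transform, normalize by the zero-frequency component) is the standard sequence; the minimal NumPy sketch below illustrates that chain for a uniformly sampled one-dimensional edge profile. The pixel spacing, optional window, and synthetic edge are illustrative assumptions, not details from the paper.

    # Sketch: standard ESF -> LSF -> MTF computation for a 1-D edge profile.
    # Assumes a uniformly sampled edge spread function; not the paper's code.
    import numpy as np

    def esf_to_mtf(esf: np.ndarray, pixel_spacing_mm: float):
        """Return spatial frequencies (cycles/mm) and the normalized MTF."""
        lsf = np.gradient(esf, pixel_spacing_mm)        # differentiate ESF -> LSF
        lsf = lsf * np.hanning(lsf.size)                # optional window to suppress noise
        spectrum = np.abs(np.fft.rfft(lsf))             # magnitude spectrum of the LSF
        mtf = spectrum / spectrum[0]                    # normalize by the zero-frequency value
        freqs = np.fft.rfftfreq(lsf.size, d=pixel_spacing_mm)
        return freqs, mtf

    if __name__ == "__main__":
        x = np.linspace(-5.0, 5.0, 512)                 # position along the edge (mm)
        esf = 0.5 * (1.0 + np.tanh(x / 0.6))            # synthetic blurred edge profile
        freqs, mtf = esf_to_mtf(esf, pixel_spacing_mm=float(x[1] - x[0]))
        print(f"MTF at 0.5 cycles/mm ~ {np.interp(0.5, freqs, mtf):.3f}")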
Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation*; Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods*; Humans; Polymethyl Methacrylate
6. A study on the predictive model of porous hyperelastic properties of human alveolar bone based on computed tomography imaging.
Bin WU ; Mingna LI ; Fan YANG ; Le YUAN ; Yi LU ; Di JIANG ; Yang YI ; Bin YAN
Journal of Biomedical Engineering 2025;42(2):359-365
Alveolar bone reconstruction simulation is an effective means of quantifying orthodontic treatment, but it is currently not possible to obtain human alveolar bone material models directly for simulation. This study introduces a prediction method for the equivalent shear modulus of three-dimensional random porous materials, integrating the first-order Ogden hyperelastic model to construct a computed tomography (CT)-based porous hyperelastic Ogden (CT-PHO) model for human alveolar bone. Model parameters are derived by combining results from micro-CT, nanoindentation experiments, and uniaxial compression tests. Compared with previous predictive models, the CT-PHO model shows a lower root mean square error (RMSE) under all bone density conditions. Simulations using the CT-PHO model parameters predict the mechanical behavior of alveolar bone under uniaxial compression more accurately. Further prediction and validation with alveolar bone samples from different individuals yield accurate results, confirming the generality of the CT-PHO model. The study suggests that the proposed CT-PHO model can estimate the material properties of human alveolar bone and may eventually be used for bone reconstruction simulations to guide clinical treatment.
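For reference, the classical incompressible form of the first-order Ogden strain-energy function mentioned above is given below, in the parameter convention of Ogden's original formulation; some finite-element packages normalize the modulus differently, and the paper's porous, compressible CT-PHO extension is not specified in the abstract.

    % Classical incompressible first-order Ogden strain-energy function;
    % \lambda_i are the principal stretches, \mu and \alpha are material parameters.
    W(\lambda_1,\lambda_2,\lambda_3)
      = \frac{\mu}{\alpha}\left(\lambda_1^{\alpha}+\lambda_2^{\alpha}+\lambda_3^{\alpha}-3\right),
    \qquad \lambda_1\lambda_2\lambda_3 = 1 .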
Humans; Tomography, X-Ray Computed/methods*; Porosity; Alveolar Process/physiology*; Bone Density; Computer Simulation; Elasticity; X-Ray Microtomography; Stress, Mechanical; Finite Element Analysis; Models, Biological
7. Cross-modal translation of magnetic resonance imaging and computed tomography images based on diffusion generative adversarial networks.
Hong SHAO ; Yixuan JING ; Wencheng CUI
Journal of Biomedical Engineering 2025;42(3):575-584
To address the issues of difficulty in preserving anatomical structures, low realism of generated images, and loss of high-frequency image information in medical image cross-modal translation, this paper proposes a medical image cross-modal translation method based on diffusion generative adversarial networks. First, an unsupervised translation module is used to convert magnetic resonance imaging (MRI) into pseudo-computed tomography (CT) images. Subsequently, a nonlinear frequency decomposition module is used to extract high-frequency CT images. Finally, the pseudo-CT image is input into the forward process, while the high-frequency CT image is used as a conditional input to guide the reverse process to generate the final CT image. The proposed model is evaluated on the SynthRAD2023 dataset, which is used for CT image generation in radiotherapy planning. The generated brain CT images achieve a Fréchet Inception Distance (FID) of 33.1597, a structural similarity index measure (SSIM) of 89.84%, a peak signal-to-noise ratio (PSNR) of 35.5965 dB, and a mean squared error (MSE) of 17.8739. The generated pelvic CT images yield an FID of 33.9516, an SSIM of 91.30%, a PSNR of 34.8707 dB, and an MSE of 17.4658. Experimental results show that the proposed model generates highly realistic CT images while preserving anatomical accuracy as much as possible. The translated CT images can be used effectively in radiotherapy planning, further enhancing diagnostic efficiency.
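The "forward process" and conditioned "reverse process" above are the two halves of a diffusion model. As a point of reference only (the abstract does not give the exact formulation used in the proposed diffusion generative adversarial network), the standard DDPM forward-noising step and its closed form are:

    % Standard DDPM forward (noising) process; \beta_t is the noise schedule
    % and \bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s).
    q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t\mathbf{I}\bigr),
    \qquad
    q(x_t \mid x_0) = \mathcal{N}\!\bigl(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\bigr).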
Humans; Tomography, X-Ray Computed/methods*; Magnetic Resonance Imaging/methods*; Image Processing, Computer-Assisted/methods*; Neural Networks, Computer; Brain/diagnostic imaging*; Algorithms; Radiotherapy Planning, Computer-Assisted; Generative Adversarial Networks
8. Advances in multimodal biomedical imaging of small animals.
Zhengyan DENG ; Peng XI ; Juan TANG ; Qiushi REN ; Yuanjun YU
Journal of Biomedical Engineering 2025;42(4):841-846
Small animal multimodal biomedical imaging refers to the integration of multiple imaging techniques within the same system or device to acquire comprehensive physiological and pathological information of small animals, such as mice and rats. With the continuous advancement of biomedical research, this cutting-edge technology has attracted extensive attention. Multimodal imaging techniques, based on diverse imaging principles, overcome the limitations of single-modal imaging through information fusion, significantly enhancing the overall system's sensitivity, temporal/spatial resolution, and quantitative accuracy. In the future, the integration of new materials and artificial intelligence will further boost its sensitivity and resolution. Through interdisciplinary innovation, this technology is expected to become a core technology of personalized medicine and expand its applications to drug development, environmental monitoring, and other fields, thus reshaping the landscape of biomedical research and clinical practice. This review summarizes progress in the application and investigation of multimodal biomedical imaging techniques and discusses its future development.
Animals; Multimodal Imaging/trends*; Rats; Mice; Artificial Intelligence; Diagnostic Imaging/methods*; Magnetic Resonance Imaging; Tomography, X-Ray Computed
9. Endometrial cancer lesion region segmentation based on large kernel convolution and combined attention.
Rushu PENG ; Qinghao ZENG ; Bin HE ; Junjie LIU ; Zhang XIAO
Journal of Biomedical Engineering 2025;42(5):928-935
Endometrial cancer (EC) is one of the most common gynecological malignancies, with an increasing incidence rate worldwide. Accurate segmentation of lesion areas in computed tomography (CT) images is a critical step in assisting clinical diagnosis. In this study, we propose a novel deep learning-based segmentation model, termed the spatial choice and weight union network (SCWU-Net), which incorporates two newly designed modules: the spatial selection module (SSM) and the combination weight module (CWM). The SSM enhances the model's ability to capture contextual information through deep convolutional blocks, while the CWM, based on joint attention mechanisms, is employed within the skip connections to further boost segmentation performance. By integrating the strengths of both modules into a U-shaped multi-scale architecture, the model achieves precise segmentation of EC lesion regions. Experimental results on a public dataset demonstrate that SCWU-Net achieves a Dice similarity coefficient (DSC) of 82.98%, an intersection over union (IoU) of 78.63%, a precision of 92.36%, and a recall of 84.10%, significantly outperforming other state-of-the-art models overall. This study enhances the accuracy of lesion segmentation in EC CT images and holds potential clinical value for the auxiliary diagnosis of endometrial cancer.
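The Dice similarity coefficient and intersection over union reported above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch of how they are commonly computed follows; it is illustrative only, and the mask names and smoothing constant are assumptions rather than details from the paper.

    # Sketch: Dice and IoU for binary segmentation masks (illustrative only).
    import numpy as np

    def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
        """pred and target are binary masks of the same shape."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = float(np.logical_and(pred, target).sum())
        union = float(np.logical_or(pred, target).sum())
        dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
        iou = (intersection + eps) / (union + eps)
        return float(dice), float(iou)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ground_truth = rng.random((256, 256)) > 0.7   # synthetic ground-truth mask
        prediction = rng.random((256, 256)) > 0.7     # synthetic predicted mask
        print(dice_and_iou(prediction, ground_truth))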
Humans; Endometrial Neoplasms/diagnostic imaging*; Female; Tomography, X-Ray Computed/methods*; Deep Learning; Algorithms; Image Processing, Computer-Assisted/methods*; Neural Networks, Computer
10. Advances in radiomics for early diagnosis and precision treatment of lung cancer.
Jiayi LI ; Wenxin LUO ; Zhoufeng WANG ; Weimin LI
Journal of Biomedical Engineering 2025;42(5):1062-1068
Lung cancer is a leading cause of cancer-related deaths worldwide, with its high mortality rate primarily attributed to delayed diagnosis. Radiomics, by extracting abundant quantitative features from medical images, offers novel possibilities for early diagnosis and precise treatment of lung cancer. This article reviews the latest advancements in radiomics for lung cancer management, particularly its integration with artificial intelligence (AI) to optimize diagnostic processes and personalize treatment strategies. Despite existing challenges, such as non-standardized image acquisition parameters and limitations in model reproducibility, the incorporation of AI significantly enhances the precision and efficiency of image analysis, thereby improving the prediction of disease progression and the formulation of treatment plans. We emphasize the critical importance of standardizing image acquisition parameters and discuss the role of AI in advancing the clinical application of radiomics, alongside future research directions.
Humans; Lung Neoplasms/diagnosis*; Artificial Intelligence; Early Detection of Cancer/methods*; Precision Medicine; Image Processing, Computer-Assisted/methods*; Tomography, X-Ray Computed; Radiomics
