1. Advancements in Research on Preoperative Localization of Pulmonary Nodules.
Jialong CHEN ; Lei ZHOU ; Lingling QIN ; Chunlai LIU
Chinese Journal of Lung Cancer 2025;28(5):385-390
In recent years, the widespread application of chest computed tomography (CT) screening has led to a significant increase in the detection rate of pulmonary nodules. Video-assisted thoracic surgery (VATS), which serves both diagnostic and therapeutic roles in early-stage lung cancer, has emerged as the preferred approach for resecting pulmonary nodules. Clinical evidence demonstrates that precise preoperative localization significantly enhances surgical success rates (reducing conversion to thoracotomy), minimizes complications, and shortens operation time. This review systematically evaluates six cutting-edge localization techniques: percutaneous puncture-assisted localization, electromagnetic navigation bronchoscopy (ENB) localization, 3D-printed auxiliary localization, basin-analysis-based localization, robotic navigation system localization, and mixed reality (MR)-guided localization. By critically analyzing their operational principles, efficacy, safety profiles, and clinical applicability, this paper aims to provide evidence-based recommendations for optimizing clinical decision-making in pulmonary nodule management.
Humans
;
Lung Neoplasms/diagnosis*
;
Solitary Pulmonary Nodule/diagnostic imaging*
;
Thoracic Surgery, Video-Assisted/methods*
;
Multiple Pulmonary Nodules/diagnostic imaging*
;
Tomography, X-Ray Computed
2. Predicting Invasive Non-mucinous Lung Adenocarcinoma IASLC Grading: A Nomogram Based on Dual-energy CT Imaging and Conventional Features.
Kaibo ZHU ; Liangna DENG ; Yue HOU ; Lulu XIONG ; Caixia ZHU ; Haisheng WANG ; Junlin ZHOU
Chinese Journal of Lung Cancer 2025;28(8):585-596
BACKGROUND:
Lung adenocarcinoma is an important histopathologic subtype of non-small cell lung cancer (NSCLC). Invasive non-mucinous pulmonary adenocarcinomas (INMA) tend to have a poor prognosis because of their marked heterogeneity and diverse histologic components, so establishing a histologic grading system for INMA is crucial for evaluating its malignancy. In 2021, the International Association for the Study of Lung Cancer (IASLC) proposed a new histological grading system to better stratify the prognosis of INMA patients. The aim of this study was to establish a visualized nomogram model to predict INMA IASLC grading preoperatively by means of dual-energy computed tomography (DECT), fractal dimension (FD), clinical features, and conventional CT parameters.
METHODS:
A total of 112 patients with INMA who underwent preoperative DECT between March 2021 and January 2025 were retrospectively enrolled. Patients were categorized into low-intermediate grade and high grade groups based on IASLC grading. Clinical characteristics and conventional CT parameters, including baseline features, biochemical markers, and serum tumor markers, were collected. DECT-derived parameters, including iodine concentration (IC), effective atomic number (eff-Z), and normalized IC (NIC), were collected, and the NIC ratio (NICr) and fractal dimension (FD) were calculated. Univariate analysis was employed to compare differences in conventional characteristics and DECT parameters between the two groups, and variables demonstrating statistical significance were subsequently incorporated into a multivariate logistic regression analysis. A nomogram model integrating clinical data, conventional CT parameters, and DECT parameters was developed from the independent predictors of IASLC grading of INMA identified by the regression. The discriminatory performance of the model was evaluated using receiver operating characteristic (ROC) curve analysis.
RESULTS:
Multivariate analysis identified smoking history [odds ratio (OR)=2.848, P=0.041], lobulation sign (OR=2.163, P=0.004), air bronchogram (OR=7.833, P=0.005), eff-Z in arterial phase (OR=4.266, P<0.001), and IC in arterial phase (OR=1.290, P=0.012) as independent and significant predictors for IASLC grading of INMA. The nomogram model constructed based on these indicators demonstrated optimal predictive performance, achieving an area under the curve (AUC) of 0.804 (95%CI: 0.725-0.883), with specificity and sensitivity of 85.3% and 65.7%, respectively.
CONCLUSIONS
The nomogram model based on clinical features, imaging features, and spectral CT parameters has large potential for application in the preoperative noninvasive assessment of INMA IASLC grading.
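The multivariate logistic regression and ROC workflow described in this abstract can be sketched in a few lines. The sketch below uses entirely synthetic data (random stand-ins for the five predictors; none of the study's patient data or fitted coefficients are used) and assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 112  # same cohort size as the study, but the data are entirely synthetic
X = rng.normal(size=(n, 5))                    # five candidate predictors
logits = 1.2 * X[:, 0] + 0.8 * X[:, 3] - 0.5   # arbitrary "true" effects
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = high grade

model = LogisticRegression().fit(X, y)         # multivariate logistic regression
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # ROC discrimination
print(f"in-sample AUC = {auc:.3f}")
```

With real data, the AUC would of course be estimated on held-out patients rather than in-sample, and the fitted coefficients would feed the nomogram's axis scales.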
Humans
;
Nomograms
;
Female
;
Male
;
Middle Aged
;
Tomography, X-Ray Computed/methods*
;
Lung Neoplasms/pathology*
;
Aged
;
Retrospective Studies
;
Adenocarcinoma of Lung/pathology*
;
Neoplasm Grading
;
Adult
3. Post-resuscitation care of patients with return of spontaneous circulation after out-of-hospital cardiac arrest at the emergency department.
Jing Kai Jackie LAM ; Jen Heng PEK
Singapore medical journal 2025;66(2):66-72
INTRODUCTION:
Out-of-hospital cardiac arrest (OHCA) is a major public health challenge, and goals after return of spontaneous circulation (ROSC) have shifted from survival alone to survival with intact neurology. Although post-ROSC care is crucial for neurologically intact survival, well-established protocols for post-resuscitation care remain insufficient. We aimed to evaluate post-resuscitation care in the emergency department (ED) for adult (aged ≥16 years) OHCA patients with sustained ROSC and its association with neurologically intact survival.
METHODS:
A retrospective review of electronic medical records was conducted for OHCA patients with sustained ROSC at the ED. Data including demographics, pre-hospital resuscitation, ED resuscitation, post-resuscitation care and eventual outcomes were analysed.
RESULTS:
Among 921 OHCA patients, 85 (9.2%) had sustained ROSC at the ED. Nineteen patients (19/85, 22.4%) survived, with 13 (13/85, 15.3%) having intact neurology at discharge. Electrocardiogram and chest X-ray were performed in all OHCA patients, whereas computed tomography (CT) was performed inconsistently: CT brain was most common (74/85, 87.1%), while CT pulmonary angiogram (6/85, 7.1%), abdomen and pelvis (4/85, 4.7%) and aortogram (2/85, 2.4%) were done infrequently. Only four patients (4.7%) had all five neuroprotective goals of normoxia, normocarbia, normotension, normothermia and normoglycaemia achieved in the ED. The proportion of patients with all five neuroprotective goals met was significantly higher (P = 0.01) among those with neurologically intact survival (3/13, 23.1%) than those without (1/72, 1.4%).
CONCLUSION
Post-resuscitation care at the ED showed great variability, indicating gaps between recommended guidelines and clinical practice. Good quality post-resuscitation care, centred around neuroprotection goals, must be initiated promptly to achieve meaningful survival with intact neurology.
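The comparison of neuroprotective-goal attainment between outcome groups (3/13 vs. 1/72) is the kind of small-sample 2×2 comparison typically tested with Fisher's exact test. A minimal sketch, assuming SciPy and reconstructing the 2×2 table from the counts in the abstract (the authors' actual test statistic is not stated):

```python
from scipy.stats import fisher_exact

# Rows: neurologically intact survival vs. not; columns: all five
# neuroprotective goals met vs. not met (counts taken from the abstract).
table = [[3, 13 - 3],
         [1, 72 - 1]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.4f}")
```

An exact test is preferred here because two cells of the table have expected counts well below 5, where the chi-squared approximation is unreliable.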
Humans
;
Out-of-Hospital Cardiac Arrest/mortality*
;
Retrospective Studies
;
Male
;
Female
;
Middle Aged
;
Emergency Service, Hospital
;
Cardiopulmonary Resuscitation/methods*
;
Return of Spontaneous Circulation
;
Aged
;
Adult
;
Treatment Outcome
;
Electrocardiography
;
Tomography, X-Ray Computed
;
Aged, 80 and over
4. Chest computed tomography-based artificial intelligence-aided latent class analysis for diagnosis of severe pneumonia.
Caiting CHU ; Yiran GUO ; Zhenghai LU ; Ting GUI ; Shuhui ZHAO ; Xuee CUI ; Siwei LU ; Meijiao JIANG ; Wenhua LI ; Chengjin GAO
Chinese Medical Journal 2025;138(18):2316-2323
BACKGROUND:
There is little literature describing the artificial intelligence (AI)-aided diagnosis of severe pneumonia (SP) subphenotypes or the association of these subphenotypes with ventilatory treatment efficacy. The aim of our study was to determine whether clinical and biological heterogeneity, such as in ventilation and gas exchange, exists among patients with SP, using chest computed tomography (CT)-based AI-aided latent class analysis (LCA).
METHODS:
This retrospective study included 413 patients diagnosed with SP and hospitalized at Xinhua Hospital from June 1, 2015 to May 30, 2020. AI quantification results of chest CT, alone and combined with additional clinical variables, were used to develop LCA models in the SP population. The optimal number of subphenotypes was determined through evaluating statistical indicators of all the LCA models, and their clinical implications, such as guiding ventilation strategies, were further explored with statistical methods.
RESULTS:
The two-class LCA model based on AI quantification results of chest CT described the biological characteristics of the SP population well and yielded two clinical subphenotypes. Patients with subphenotype-1 had milder infections (P<0.001) than patients with subphenotype-2, and had lower 30-day (P<0.001), 90-day (P<0.001), in-hospital (P = 0.001) and 2-year (P<0.001) mortality. Patients with subphenotype-1 showed a better match between the percentage of non-infected lung volume (used to quantify ventilation) and oxygen saturation (used to reflect gas exchange) than patients with subphenotype-2, and the difference in the matching degree of lung ventilation and gas exchange between the two subphenotypes was significant (P<0.001). Compared with patients with subphenotype-2, those with subphenotype-1 showed a relatively better match between CT-based AI metrics of the non-infected region and oxygenation, and their clinical outcomes were effectively improved after receiving invasive ventilation treatment.
CONCLUSIONS
A two-class LCA model based on AI quantification results of chest CT in the SP population particularly revealed clinical heterogeneity of lung function. Identifying the degree of match between ventilation and gas-exchange may help guide decisions about assisted ventilation.
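Classic LCA operates on categorical indicators; as a simplified continuous analogue (a latent profile model), a two-component Gaussian mixture over synthetic "CT-AI" features illustrates how a two-class model is fitted and how an information criterion guides the choice of class count. All feature values below are fabricated for illustration and do not come from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two synthetic subphenotypes differing in "infected lung volume fraction"
# and "oxygen saturation" (both columns fabricated for illustration).
grp1 = rng.normal([0.15, 0.96], [0.05, 0.02], size=(200, 2))
grp2 = rng.normal([0.45, 0.88], [0.08, 0.04], size=(200, 2))
X = np.vstack([grp1, grp2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)   # class assignment for each synthetic patient
bic = gmm.bic(X)          # lower BIC across candidate class counts -> preferred
print("class sizes:", np.bincount(labels), "BIC:", round(bic, 1))
```

In practice one would fit models with 2, 3, 4, ... classes and keep the count that minimizes BIC, mirroring the "evaluating statistical indicators of all the LCA models" step above.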
Humans
;
Tomography, X-Ray Computed/methods*
;
Male
;
Female
;
Retrospective Studies
;
Middle Aged
;
Artificial Intelligence
;
Aged
;
Pneumonia/diagnosis*
;
Latent Class Analysis
;
Adult
5. Artificial intelligence in medical imaging: From task-specific models to large-scale foundation models.
Yueyan BIAN ; Jin LI ; Chuyang YE ; Xiuqin JIA ; Qi YANG
Chinese Medical Journal 2025;138(6):651-663
Artificial intelligence (AI), particularly deep learning, has demonstrated remarkable performance in medical imaging across a variety of modalities, including X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and pathological imaging. However, most existing state-of-the-art AI techniques are task-specific and focus on a limited range of imaging modalities. Compared to these task-specific models, emerging foundation models represent a significant milestone in AI development. These models can learn generalized representations of medical images and apply them to downstream tasks through zero-shot or few-shot fine-tuning. Foundation models have the potential to address the comprehensive and multifactorial challenges encountered in clinical practice. This article reviews the clinical applications of both task-specific and foundation models, highlighting their differences, complementarities, and clinical relevance. We also examine their future research directions and potential challenges. Unlike the replacement relationship seen between deep learning and traditional machine learning, task-specific and foundation models are complementary, despite inherent differences. While foundation models primarily focus on segmentation and classification, task-specific models are integrated into nearly all medical image analyses. However, with further advancements, foundation models could be applied to other clinical scenarios. In conclusion, all indications suggest that task-specific and foundation models, especially the latter, have the potential to drive breakthroughs in medical imaging, from image processing to clinical workflows.
Humans
;
Artificial Intelligence
;
Deep Learning
;
Diagnostic Imaging/methods*
;
Magnetic Resonance Imaging
;
Tomography, X-Ray Computed
;
Positron-Emission Tomography
6. Stroke-p2pHD: Cross-modality generation model of cerebral infarction from CT to DWI images.
Qing WANG ; Xinyao ZHAO ; Xinyue LIU ; Zhimeng ZOU ; Haiwang NAN ; Qiang ZHENG
Journal of Biomedical Engineering 2025;42(2):255-262
Among medical imaging modalities, diffusion weighted imaging (DWI) is extremely sensitive to acute ischemic stroke lesions, especially small infarcts. However, magnetic resonance imaging is time-consuming and expensive, and is prone to interference from metal implants. The aim of this study was therefore to design a generative adversarial network-based medical image synthesis method, Stroke-p2pHD, for synthesizing DWI images from computed tomography (CT). Stroke-p2pHD consists of a generator that effectively fuses local image features and global context information (Global_to_Local) and a multi-scale discriminator (M2Dis). Specifically, the Global_to_Local generator integrates a fully convolutional Transformer (FCT) and a local attention module (LAM) to synthesize detailed information such as textures and lesions in DWI images, while the M2Dis discriminator uses a multi-scale convolutional network to discriminate the input images, maintaining an optimization balance with the Global_to_Local generator and constraining the consistency of features in each layer of M2Dis. The public Acute Ischemic Stroke Dataset (AISD) and the acute cerebral infarction dataset from Yantaishan Hospital were used to verify the performance of the Stroke-p2pHD model in synthesizing DWI from CT. Compared with other methods, the Stroke-p2pHD model showed excellent quantitative results (mean squared error = 0.008, peak signal-to-noise ratio = 23.766, structural similarity = 0.743). Relevant experimental analyses, such as of computational efficiency, further verify that the Stroke-p2pHD model has great potential for clinical application.
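Of the three quantitative metrics reported above, MSE and PSNR are simple enough to sketch directly in NumPy (SSIM, the third, is available as `skimage.metrics.structural_similarity` and is omitted for brevity). The arrays below are random placeholders, not real CT/DWI slices.

```python
import numpy as np

rng = np.random.default_rng(7)
reference = rng.random((128, 128))                 # stand-in for a real DWI slice
synthetic = reference + rng.normal(0, 0.05, reference.shape)  # noisy "synthesis"

mse = float(np.mean((reference - synthetic) ** 2))  # mean squared error
data_range = 1.0                                    # images assumed scaled to [0, 1]
psnr = 10 * np.log10(data_range ** 2 / mse)         # PSNR in dB
print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")
```

Note that PSNR depends on the assumed data range; published values are only comparable when images are normalized the same way.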
Humans
;
Tomography, X-Ray Computed/methods*
;
Diffusion Magnetic Resonance Imaging/methods*
;
Cerebral Infarction/diagnostic imaging*
;
Stroke/diagnostic imaging*
;
Neural Networks, Computer
;
Image Processing, Computer-Assisted/methods*
;
Algorithms
7. A method for determining the spatial resolution of a phantom based on automatic contour delineation.
Ying LIU ; Minghao SUN ; Haowei ZHANG ; Haikuan LIU
Journal of Biomedical Engineering 2025;42(2):263-271
In this study, we propose an automatic contour delineation method to measure the spatial resolution of a homemade automatic tube current modulation (ATCM) phantom by delineating the edge contour of the phantom image, selecting a region of interest (ROI), and measuring the spatial resolution characteristics of the computed tomography (CT) phantom image. Specifically, the method obtains a binarized image of the phantom delineated by an automated fast region convolutional neural network (AFRCNN) model, measures the edge spread function (ESF) of the CT phantom at different tube currents and slice thicknesses, and differentiates the ESF to obtain the line spread function (LSF). Finally, the LSF is Fourier transformed and normalized by its zero-frequency value to obtain the modulation transfer function (MTF), from which the CT spatial resolution index (RI) is measured automatically. To verify the feasibility of the method, the algorithm is compared with one that measures the MTF of the phantom edges using polymethyl methacrylate (PMMA); the results show that the AFRCNN model not only improves the efficiency and accuracy of phantom contour delineation but also obtains more accurate spatial resolution values through automated segmentation. In summary, the algorithm proposed in this study measures the spatial resolution of phantom images accurately and has the potential to be widely used on real clinical CT images.
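The ESF → LSF → MTF chain is classical and can be sketched independently of the AFRCNN contour step, which is not reproduced here. A synthetic Gaussian-blurred edge stands in for the measured phantom edge profile; the blur width and sampling are assumed values.

```python
from math import erf

import numpy as np

x = np.linspace(-5, 5, 512)          # position across the edge, mm (synthetic)
sigma = 0.4                          # assumed edge blur, mm
# Edge spread function: an ideal step convolved with Gaussian blur
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, x)            # differentiate the ESF to get the LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                        # normalize by the zero-frequency value

freqs = np.fft.rfftfreq(len(lsf), d=x[1] - x[0])   # cycles/mm
f50 = freqs[np.argmax(mtf < 0.5)]    # 50% MTF frequency, a common resolution index
print(f"MTF50 = {f50:.2f} cycles/mm")
```

For a Gaussian edge blur the MTF is itself Gaussian in frequency, so the numerical MTF50 can be checked against the closed form exp(-2*pi**2*sigma**2*f**2) = 0.5.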
Phantoms, Imaging
;
Tomography, X-Ray Computed/instrumentation*
;
Algorithms
;
Neural Networks, Computer
;
Image Processing, Computer-Assisted/methods*
;
Humans
;
Polymethyl Methacrylate
8. A study on the predictive model of porous hyperelastic properties of human alveolar bone based on computed tomography imaging.
Bin WU ; Mingna LI ; Fan YANG ; Le YUAN ; Yi LU ; Di JIANG ; Yang YI ; Bin YAN
Journal of Biomedical Engineering 2025;42(2):359-365
Alveolar bone reconstruction simulation is an effective means of quantifying orthodontic treatment, but material models of human alveolar bone cannot currently be obtained directly for simulation. This study introduces a prediction method for the equivalent shear modulus of three-dimensional random porous materials, integrating the first-order Ogden hyperelastic model to construct a computed tomography (CT)-based porous hyperelastic Ogden model (CT-PHO) of human alveolar bone. Model parameters are derived by combining results from micro-CT, nanoindentation experiments, and uniaxial compression tests. Compared with previous predictive models, the CT-PHO model shows a lower root mean square error (RMSE) under all bone density conditions, and simulations using its parameters predict the mechanical behavior of alveolar bone under uniaxial compression more accurately. Further prediction and validation with alveolar bone samples from different individuals yield accurate results, confirming the generality of the CT-PHO model. The study suggests that the proposed CT-PHO model can estimate the material properties of human alveolar bone and may eventually be used in bone reconstruction simulations to guide clinical treatment.
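The one-term Ogden model at the core of the CT-PHO construction has a closed-form uniaxial response that is easy to evaluate. The sketch below uses arbitrary placeholder parameters (mu, alpha), not the paper's fitted alveolar-bone values, and assumes incompressibility.

```python
import numpy as np

mu, alpha = 1.0, 8.0                  # placeholder Ogden parameters (assumed)
lam = np.linspace(0.85, 1.0, 16)      # stretch ratios under uniaxial compression

# Incompressible uniaxial nominal (first Piola-Kirchhoff) stress for the
# one-term Ogden model W = (2*mu/alpha**2) * (l1**alpha + l2**alpha + l3**alpha - 3),
# with l2 = l3 = lam**(-1/2):
P = (2 * mu / alpha) * (lam ** (alpha - 1) - lam ** (-alpha / 2 - 1))
print("nominal stress at lam = 0.85:", round(float(P[0]), 4))
```

Negative values of P correspond to compressive stress, and at lam = 1 the stress vanishes, as it must in the unloaded reference state.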
Humans
;
Tomography, X-Ray Computed/methods*
;
Porosity
;
Alveolar Process/physiology*
;
Bone Density
;
Computer Simulation
;
Elasticity
;
X-Ray Microtomography
;
Stress, Mechanical
;
Finite Element Analysis
;
Models, Biological
9. Cross-modal translation of magnetic resonance imaging and computed tomography images based on diffusion generative adversarial networks.
Hong SHAO ; Yixuan JING ; Wencheng CUI
Journal of Biomedical Engineering 2025;42(3):575-584
To address the difficulty of preserving anatomical structures, the low realism of generated images, and the loss of high-frequency image information in medical image cross-modal translation, this paper proposes a cross-modal translation method based on diffusion generative adversarial networks. First, an unsupervised translation module converts magnetic resonance imaging (MRI) into pseudo-computed tomography (CT) images. A nonlinear frequency decomposition module then extracts high-frequency CT images. Finally, the pseudo-CT image is fed into the forward process, while the high-frequency CT image is used as a conditional input to guide the reverse process in generating the final CT image. The proposed model is evaluated on the SynthRAD2023 dataset, which is used for CT image generation for radiotherapy planning. The generated brain CT images achieve a Fréchet Inception Distance (FID) score of 33.1597, a structural similarity index measure (SSIM) of 89.84%, a peak signal-to-noise ratio (PSNR) of 35.5965 dB, and a mean squared error (MSE) of 17.8739; the generated pelvic CT images yield an FID score of 33.9516, an SSIM of 91.30%, a PSNR of 34.8707 dB, and an MSE of 17.4658. Experimental results show that the proposed model generates highly realistic CT images while preserving anatomical accuracy as much as possible. The translated CT images can be effectively used in radiotherapy planning, further enhancing diagnostic efficiency.
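The forward (noising) process referenced above is, in generic DDPM form, x_t = sqrt(alpha_bar_t)*x_0 + sqrt(1 - alpha_bar_t)*eps. The sketch below implements only this generic step with a common linear noise schedule; it is a stand-in, not the paper's specific conditional diffusion GAN, and the "image" is a random array.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.random((64, 64))              # stand-in for a pseudo-CT image
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # common linear beta schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)    # cumulative signal-retention factor

t = 500
eps = rng.standard_normal(x0.shape)
# One forward-diffusion step: mix the clean image with Gaussian noise
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
print("alpha_bar[500] =", round(float(alpha_bar[t]), 4))
```

In the paper's pipeline the reverse (denoising) process runs from such a noised state back to a clean CT image, guided by the high-frequency conditional input.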
Humans
;
Tomography, X-Ray Computed/methods*
;
Magnetic Resonance Imaging/methods*
;
Image Processing, Computer-Assisted/methods*
;
Neural Networks, Computer
;
Brain/diagnostic imaging*
;
Algorithms
;
Radiotherapy Planning, Computer-Assisted
;
Generative Adversarial Networks
10. Advances in multimodal biomedical imaging of small animals.
Zhengyan DENG ; Peng XI ; Juan TANG ; Qiushi REN ; Yuanjun YU
Journal of Biomedical Engineering 2025;42(4):841-846
Small animal multimodal biomedical imaging refers to the integration of multiple imaging techniques within the same system or device to acquire comprehensive physiological and pathological information from small animals such as mice and rats. With the continuous advancement of biomedical research, this cutting-edge technology has attracted extensive attention. Multimodal imaging techniques, based on diverse imaging principles, overcome the limitations of single-modal imaging through information fusion, significantly enhancing the overall system's sensitivity, temporal/spatial resolution, and quantitative accuracy. In the future, the integration of new materials and artificial intelligence is expected to further boost sensitivity and resolution. Through interdisciplinary innovation, this technology may become a core technology of personalized medicine and expand to drug development, environmental monitoring, and other fields, reshaping the landscape of biomedical research and clinical practice. This review summarizes progress in the application and investigation of multimodal biomedical imaging techniques and discusses its future development.
Animals
;
Multimodal Imaging/trends*
;
Rats
;
Mice
;
Artificial Intelligence
;
Diagnostic Imaging/methods*
;
Magnetic Resonance Imaging
;
Tomography, X-Ray Computed
