1. Noninvasive Diagnostic Technique for Nonalcoholic Fatty Liver Disease Based on Features of Tongue Images.
Rong-Rui WANG ; Jia-Liang CHEN ; Shao-Jie DUAN ; Ying-Xi LU ; Ping CHEN ; Yuan-Chen ZHOU ; Shu-Kun YAO
Chinese journal of integrative medicine 2024;30(3):203-212
OBJECTIVE:
To investigate a new noninvasive diagnostic model for nonalcoholic fatty liver disease (NAFLD) based on features of tongue images.
METHODS:
Healthy controls and volunteers confirmed to have NAFLD by liver ultrasound were recruited from China-Japan Friendship Hospital between September 2018 and May 2019; anthropometric indexes were then measured and tongue images sampled. After a series of corrections and data cleaning, the tongue images were labeled by features according to a brief protocol, blinded to all other clinical data. Machine learning algorithms were trained on the images, using the feature labels and several anthropometric indexes as inputs. Finally, a logistic regression algorithm and a decision tree model were constructed as two diagnostic models for NAFLD.
RESULTS:
A total of 720 subjects were enrolled in this study, including 432 patients with NAFLD and 288 healthy volunteers. Of them, 482 were randomly allocated to the training set and 238 to the validation set. The diagnostic model based on logistic regression exhibited excellent performance: in the validation set, it achieved an accuracy of 86.98%, a sensitivity of 91.43%, and a specificity of 80.61%, with an area under the curve (AUC) of 0.93 [95% confidence interval (CI) 0.68-0.98]. The decision tree model achieved an accuracy of 81.09%, a sensitivity of 91.43%, and a specificity of 66.33%, with an AUC of 0.89 (95% CI 0.66-0.92) in the validation set.
CONCLUSIONS:
The features of tongue images were associated with NAFLD. Both diagnostic models can accurately identify NAFLD and, as convenient, noninvasive, lightweight, rapid, and inexpensive techniques for early screening, merit further study.
Humans
;
Non-alcoholic Fatty Liver Disease/diagnostic imaging*
;
Ultrasonography
;
Anthropometry
;
Algorithms
;
China
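The abstract above reports accuracy, sensitivity, and specificity on a held-out validation set. The study's code is not published, but these metrics have standard definitions that can be sketched as follows (function name and toy labels are illustrative, not study data):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for a binary screen
    (1 = NAFLD, 0 = healthy control)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }

# Toy example (illustrative labels, not study data):
metrics = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                             [1, 1, 1, 0, 0, 0, 1, 0])
# -> accuracy 0.75, sensitivity 0.75, specificity 0.75
```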
3. Rapid identification of chronic kidney disease in electronic health record database using computable phenotype combining a common data model.
Huai-Yu WANG ; Jian DU ; Yu YANG ; Hongbo LIN ; Beiyan BAO ; Guohui DING ; Chao YANG ; Guilan KONG ; Luxia ZHANG
Chinese Medical Journal 2023;136(7):874-876
5. Machine and deep learning-based clinical characteristics and laboratory markers for the prediction of sarcopenia.
He ZHANG ; Mengting YIN ; Qianhui LIU ; Fei DING ; Lisha HOU ; Yiping DENG ; Tao CUI ; Yixian HAN ; Weiguang PANG ; Wenbin YE ; Jirong YUE ; Yong HE
Chinese Medical Journal 2023;136(8):967-973
BACKGROUND:
Sarcopenia is an age-related progressive skeletal muscle disorder involving the loss of muscle mass or strength and physiological function. Efficient and precise AI algorithms may play a significant role in the diagnosis of sarcopenia. In this study, we aimed to develop a machine learning model for sarcopenia diagnosis using clinical characteristics and laboratory indicators of aging cohorts.
METHODS:
We developed models of sarcopenia using the baseline data from the West China Health and Aging Trend (WCHAT) study. For external validation, we used the Xiamen Aging Trend (XMAT) cohort. We compared the support vector machine (SVM), random forest (RF), eXtreme Gradient Boosting (XGB), and Wide and Deep (W&D) models. The area under the receiver operating characteristic curve (AUC) and accuracy (ACC) were used to evaluate the diagnostic efficiency of the models.
RESULTS:
The WCHAT cohort, with a total of 4057 participants, provided the training and testing datasets, and the XMAT cohort, with 553 participants, served as the external validation dataset. Among the four models, W&D performed best in the training dataset (AUC = 0.916 ± 0.006, ACC = 0.882 ± 0.006), followed by SVM (AUC = 0.907 ± 0.004, ACC = 0.877 ± 0.006), XGB (AUC = 0.877 ± 0.005, ACC = 0.868 ± 0.005), and RF (AUC = 0.843 ± 0.031, ACC = 0.836 ± 0.024). In the testing dataset, the models ranked from highest to lowest diagnostic efficiency were W&D (AUC = 0.881, ACC = 0.862), XGB (AUC = 0.858, ACC = 0.861), RF (AUC = 0.843, ACC = 0.836), and SVM (AUC = 0.829, ACC = 0.857). In the external validation dataset, W&D again performed best (AUC = 0.970, ACC = 0.911), followed by RF (AUC = 0.830, ACC = 0.769), SVM (AUC = 0.766, ACC = 0.738), and XGB (AUC = 0.722, ACC = 0.749).
CONCLUSIONS:
The W&D model not only had excellent diagnostic performance for sarcopenia but also showed good economic efficiency and timeliness. It could be widely used in primary health care institutions or developing areas with an aging population.
TRIAL REGISTRATION:
Chictr.org, ChiCTR1800018895.
Humans
;
Aged
;
Sarcopenia/diagnosis*
;
Deep Learning
;
Aging
;
Algorithms
;
Biomarkers
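The comparison above ranks models by AUC and ACC. The study's evaluation code is not given, but the AUC it reports can be computed with the standard rank-based (Mann-Whitney) estimator, sketched here with illustrative toy scores:

```python
def auc_score(y_true, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    one, with ties counted as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This brute-force pairwise form is O(n²); production code would use a sorted-rank formulation, but the result is identical.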
6. Quality of Images Reconstructed by Deep Learning Reconstruction Algorithm for Head and Neck CT Angiography at 100 kVp.
Xiao-Ping LU ; Yun WANG ; Yu CHEN ; Yan-Ling WANG ; Min XU ; Zheng-Yu JIN
Acta Academiae Medicinae Sinicae 2023;45(3):416-421
Objective To evaluate the impact of a deep learning reconstruction algorithm on the image quality of head and neck CT angiography (CTA) at 100 kVp. Methods CT scanning was performed at 100 kVp for 37 patients who underwent head and neck CTA in PUMC Hospital from March to April 2021. Four sets of images were reconstructed by three-dimensional adaptive iterative dose reduction (AIDR 3D) and the advanced intelligent Clear-IQ engine (AiCE) (low-, medium-, and high-intensity algorithms), respectively. The average CT value, standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the region of interest in the transverse section image were calculated. Furthermore, the four sets of sagittal maximum intensity projection images of the anterior cerebral artery were scored (1 point: poor; 5 points: excellent). Results The SNR and CNR differed between the images reconstructed by AiCE (low, medium, and high intensity) and AIDR 3D (all P<0.01). The quality scores of the images reconstructed by AiCE (low, medium, and high intensity) and AIDR 3D were 4.78±0.41, 4.92±0.27, 4.97±0.16, and 3.92±0.27, respectively, showing statistically significant differences (all P<0.001). Conclusion AiCE outperformed AIDR 3D in reconstructing head and neck CTA images at 100 kVp, improving image quality, and is applicable in clinical examinations.
Humans
;
Computed Tomography Angiography/methods*
;
Radiation Dosage
;
Deep Learning
;
Radiographic Image Interpretation, Computer-Assisted/methods*
;
Signal-To-Noise Ratio
;
Algorithms
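The study above evaluates image quality through SNR and CNR computed from ROI statistics. The standard CT definitions (mean over SD for SNR; ROI-vs-background contrast over background noise for CNR) can be sketched as follows; the abstract does not specify ROI placement, so the inputs here are illustrative:

```python
from statistics import mean, stdev

def snr(roi):
    """Signal-to-noise ratio of a region of interest:
    mean CT value divided by its standard deviation."""
    return mean(roi) / stdev(roi)

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute ROI-vs-background contrast
    scaled by the background noise (SD)."""
    return abs(mean(roi) - mean(background)) / stdev(background)
```

Note that `statistics.stdev` is the sample SD (n-1 denominator), which matches the usual reporting convention.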
7. Characterization and identification of alkaloids in Phellodendri Chinensis Cortex and Phellodendri Amurensis Cortex based on UHPLC-IM-Q-TOF-MS.
Shan-Shan WEN ; Ping LI ; Wen GAO
China Journal of Chinese Materia Medica 2023;48(12):3294-3307
A strategy combining collision cross section (CCS) prediction and a quantitative structure-retention relationship (QSRR) model for quinoline and isoquinoline alkaloids was established based on UHPLC-IM-Q-TOF-MS and applied to Phellodendri Chinensis Cortex and Phellodendri Amurensis Cortex. The strategy included the following three steps. (1) The molecular features were extracted by the "find features" algorithm. (2) The potential quinoline and isoquinoline alkaloids were screened by filtering the original characteristic ions extracted from Phellodendri Chinensis Cortex and Phellodendri Amurensis Cortex through the established CCS vs. m/z prediction interval. (3) According to the retention times of candidate compounds predicted by the QSRR model, the chemical constituents were identified in combination with the characteristic fragment ions and fragmentation patterns from tandem mass spectrometry. With this strategy, a total of 80 compounds were predicted, and 15 were identified accurately. The strategy is effective for the identification of structural analogs in traditional Chinese medicine.
Chromatography, High Pressure Liquid
;
Algorithms
;
Alkaloids
;
Isoquinolines
;
Quinolines
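Step (2) of the strategy filters candidate ions through a CCS-vs-m/z prediction interval. The abstract does not give the model's functional form; the sketch below assumes a fitted linear trend with a fractional tolerance band purely for illustration (the slope, intercept, and tolerance are hypothetical parameters, not values from the paper):

```python
def within_ccs_interval(mz, ccs, slope, intercept, tol=0.05):
    """Keep a molecular feature if its measured CCS lies within a
    fractional tolerance band around the CCS predicted from m/z.
    slope/intercept stand in for the paper's fitted CCS-vs-m/z model."""
    predicted = slope * mz + intercept
    return abs(ccs - predicted) <= tol * predicted
```

Features passing this screen would then go on to QSRR retention-time matching and MS/MS confirmation, as the abstract describes.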
8. A meta-learning based method for segmentation of few-shot magnetic resonance images.
Xiaoqing CHEN ; Zhongliang FU ; Yu YAO
Journal of Biomedical Engineering 2023;40(2):193-201
When applying deep learning algorithms to magnetic resonance (MR) image segmentation, a large number of annotated images are required as data support. However, the specificity of MR images makes it difficult and costly to acquire large amounts of annotated image data. To reduce the dependence of MR image segmentation on large amounts of annotated data, this paper proposes a meta-learning U-shaped network (Meta-UNet) for few-shot MR image segmentation. Meta-UNet can complete MR image segmentation tasks with a small amount of annotated image data and obtain good segmentation results. Meta-UNet improves on U-Net by introducing dilated convolution, which enlarges the receptive field of the model to improve its sensitivity to targets of different scales; an attention mechanism, which improves the adaptability of the model to different scales; and a meta-learning mechanism with a composite loss function for effective supervision and bootstrapping of model training. The proposed Meta-UNet model is trained on different segmentation tasks and then evaluated on a new segmentation task, where it achieves high-precision segmentation of target images. Meta-UNet improves the mean Dice similarity coefficient (DSC) compared with the voxel morph network (VoxelMorph), data augmentation using learned transformations (DataAug), and the label transfer network (LT-Net). Experiments show that the proposed method can effectively perform MR image segmentation with a small number of samples, providing a reliable aid for clinical diagnosis and treatment.
Algorithms
;
Image Processing, Computer-Assisted
;
Magnetic Resonance Imaging
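The abstract compares methods by mean Dice similarity coefficient (DSC). The metric itself is standard; for binary masks it can be sketched as:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks,
    flattened to equal-length 0/1 sequences: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0
```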
9. Multiresolution discrete optimization registration method of ultrasound and magnetic resonance images based on key points.
Journal of Biomedical Engineering 2023;40(2):202-207
The registration of preoperative magnetic resonance (MR) images and intraoperative ultrasound (US) images is very important in the planning of brain tumor surgery and during surgery. Considering that the two modalities have different intensity ranges and resolutions, and that US images are degraded by heavy speckle noise, a self-similarity context (SSC) descriptor based on local neighborhood information was adopted to define the similarity measure. The US images were taken as the reference, corners were extracted as key points using three-dimensional differential operators, and a dense displacement sampling discrete optimization algorithm was adopted for registration. The whole registration process was divided into two stages: affine registration and elastic registration. In the affine registration stage, the images were decomposed with a multi-resolution scheme; in the elastic registration stage, the displacement vectors of the key points were regularized using minimum convolution and mean field reasoning strategies. The registration experiment was performed on the preoperative MR images and intraoperative US images of 22 patients. The overall error after affine registration was (1.57 ± 0.30) mm, with an average computation time of only 1.36 s per image pair; the overall error after elastic registration was further reduced to (1.40 ± 0.28) mm, with an average registration time of 1.53 s. The experimental results show that the proposed method achieves prominent registration accuracy with high computational efficiency.
Humans
;
Imaging, Three-Dimensional/methods*
;
Magnetic Resonance Imaging/methods*
;
Ultrasonography/methods*
;
Algorithms
;
Surgery, Computer-Assisted/methods*
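The registration errors above are mean landmark distances in millimeters. Assuming they follow the usual target-registration-error definition (the abstract does not detail the landmark protocol), the figure can be computed as:

```python
from math import dist  # Euclidean distance, Python 3.8+

def mean_registration_error(fixed_pts, moved_pts):
    """Mean Euclidean distance (e.g. in mm) between corresponding
    landmark points after registration."""
    return sum(dist(f, m)
               for f, m in zip(fixed_pts, moved_pts)) / len(fixed_pts)
```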
10. CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement.
Yunpeng LIU ; Jin LI ; Yu WANG ; Wenli CAI ; Fei CHEN ; Wenjie LIU ; Xianhao MAO ; Kaifeng GAN ; Renfang WANG ; Dechao SUN ; Hong QIU ; Bangquan LIU
Journal of Biomedical Engineering 2023;40(2):208-216
Aiming at the problems of missing important features, inconspicuous details, and unclear textures in the fusion of multimodal medical images, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator targeted the high-frequency feature images, and double discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, while the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edge information in subjective evaluation. In the objective evaluation, Q^AB/F, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher than the best comparison results, respectively. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.
Image Processing, Computer-Assisted/methods*
;
Neural Networks, Computer
;
Tomography, X-Ray Computed
;
Magnetic Resonance Imaging/methods*
;
Algorithms
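Among the objective indicators in the abstract above, information entropy (IE) and spatial frequency (SF) have standard closed-form definitions. A minimal pure-Python sketch for a 2-D grayscale image follows (illustrative only; the paper's exact implementation is not published):

```python
from math import log2, sqrt

def spatial_frequency(img):
    """Spatial frequency (SF) of a 2-D grayscale image: RMS of the
    row-wise and column-wise first differences, a common fusion index."""
    rows, cols = len(img), len(img[0])
    rf = sum((img[r][c] - img[r][c - 1]) ** 2
             for r in range(rows) for c in range(1, cols))
    cf = sum((img[r][c] - img[r - 1][c]) ** 2
             for r in range(1, rows) for c in range(cols))
    n = rows * cols
    return sqrt(rf / n + cf / n)

def entropy(img):
    """Shannon information entropy (IE) of the gray-level histogram."""
    flat = [v for row in img for v in row]
    n = len(flat)
    probs = [flat.count(v) / n for v in set(flat)]
    return -sum(p * log2(p) for p in probs)
```

Higher values of both indicate richer detail in the fused image, which is how the abstract's percentage improvements should be read.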
