1. Oral panorama reconstruction method based on pre-segmentation and Bezier function.
Changpeng HOU ; Fudong ZHU ; Gaohua ZHANG ; Zhen LYU ; Yunfeng LIU ; Weidong ZHU
Journal of Biomedical Engineering 2023;40(5):894-902
For patients with partial jaw defects, cysts, or dental implants, doctors need to take panoramic X-ray films or manually draw dental arch lines to generate panoramic images so that the complete dentition can be observed during oral diagnosis. To avoid the additional burden on patients of taking panoramic X-ray films and the time doctors spend manually delineating dental arch lines, this paper proposes an automatic panorama reconstruction method based on cone beam computed tomography (CBCT). A V-Net is used to pre-segment the teeth from the background and generate the corresponding binary image, and a Bezier curve is then used to define the optimal dental arch curve from which the oral panorama is generated. In addition, this work addresses the problems of mistakenly recognizing teeth and jaws as the dental arch, incomplete coverage of the dental arch area by the generated arch lines, and low robustness, providing an intelligent method for dental diagnosis and improving the work efficiency of doctors.
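A minimal sketch (not the authors' code) of the Bezier step described above: given control points fitted to the pre-segmented tooth mask, it samples a cubic Bezier dental-arch curve and the normals along it, from which a panorama could be resampled out of the CBCT volume. All control-point values and names below are illustrative assumptions.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n_samples=512):
    """Return sample points and unit normals of a cubic Bezier curve in the axial plane."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]                      # (n, 1)
    pts = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
           + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)                  # (n, 2)
    # First derivative gives the tangent; rotate it by 90 degrees for the normal.
    d = (3 * (1 - t) ** 2 * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1)
         + 3 * t ** 2 * (p3 - p2))
    tangent = d / np.linalg.norm(d, axis=1, keepdims=True)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    return pts, normal

# Hypothetical control points (voxel coordinates) fitted from the V-Net tooth mask.
p0, p1, p2, p3 = map(np.array, ([60, 200.], [90, 60.], [230, 60.], [260, 200.]))
arch_pts, arch_normals = cubic_bezier(p0, p1, p2, p3)
```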
Humans; Radiography, Panoramic/methods*; Cone-Beam Computed Tomography/methods*; Head; Image Processing, Computer-Assisted/methods*
2. Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis.
Jianing ZHOU ; Hongyu GUO ; Hong CHEN
Journal of Biomedical Engineering 2023;40(5):903-911
Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrasts are not scanned, or the acquired images do not meet diagnostic requirements, because of limited patient cooperation or scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target modal image is finally generated. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the K-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce the MRI scanning time of the patient and solve the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.
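The abstract does not give the exact form of the dynamically weighted spatial/K-space loss, so the following is only an illustrative PyTorch sketch of such a combined loss, with a single balancing weight alpha standing in for the dynamic weighting.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred, target, alpha=0.5):
    """alpha balances the image-domain term and the k-space term (assumed form)."""
    spatial = F.l1_loss(pred, target)
    # 2-D FFT of each slice; compare magnitude spectra in k-space.
    k_pred = torch.fft.fft2(pred)
    k_target = torch.fft.fft2(target)
    kspace = F.l1_loss(torch.abs(k_pred), torch.abs(k_target))
    return alpha * spatial + (1.0 - alpha) * kspace

# Example with dummy tensors shaped (batch, channel, height, width).
pred = torch.rand(2, 1, 128, 128, requires_grad=True)
target = torch.rand(2, 1, 128, 128)
loss = combined_loss(pred, target)
loss.backward()
```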
Humans; Deep Learning; Magnetic Resonance Imaging/methods*; Image Processing, Computer-Assisted/methods*
3. Review on ultrasonographic diagnosis of thyroid diseases based on deep learning.
Fengyuan QI ; Min QIU ; Guohui WEI
Journal of Biomedical Engineering 2023;40(5):1027-1032
In recent years, the incidence of thyroid diseases has increased significantly, and ultrasound examination is the first choice for their diagnosis. At the same time, medical image analysis based on deep learning has improved rapidly: ultrasonic image analysis has achieved a series of milestone breakthroughs, and deep learning algorithms have shown strong performance in medical image segmentation and classification. This article first describes the application of deep learning algorithms to thyroid ultrasound image segmentation, feature extraction, and classification. It then summarizes deep learning algorithms for processing multimodal ultrasound images. Finally, it points out the problems in thyroid ultrasound image diagnosis at the current stage and looks forward to future development directions. This review can promote the application of deep learning in clinical thyroid ultrasound image diagnosis and provide a reference for doctors diagnosing thyroid disease.
Humans; Algorithms; Deep Learning; Image Processing, Computer-Assisted/methods*; Thyroid Diseases/diagnostic imaging*; Ultrasonography
4. MRI-derived radiomics models for diagnosis, aggressiveness, and prognosis evaluation in prostate cancer.
Xuehua ZHU ; Lizhi SHAO ; Zhenyu LIU ; Zenan LIU ; Jide HE ; Jiangang LIU ; Hao PING ; Jian LU
Journal of Zhejiang University. Science. B 2023;24(8):663-681
Prostate cancer (PCa) is a pernicious tumor with high heterogeneity, which creates a conundrum for making a precise diagnosis and choosing an optimal treatment approach. Multiparametric magnetic resonance imaging (mp-MRI) with anatomical and functional sequences has evolved as a routine and significant paradigm for the detection and characterization of PCa. Moreover, using radiomics to extract quantitative data has emerged as a promising field due to the rapid growth of artificial intelligence (AI) and image data processing. Radiomics acquires novel imaging biomarkers by extracting imaging signatures and establishes models for precise evaluation. Radiomics models provide a reliable and noninvasive alternative to aid in precision medicine, demonstrating advantages over traditional models based on clinicopathological parameters. The purpose of this review is to provide an overview of related studies of radiomics in PCa, specifically around the development and validation of radiomics models using MRI-derived image features. The current landscape of the literature, focusing mainly on PCa detection, aggressiveness, and prognosis evaluation, is reviewed and summarized. Rather than studies that exclusively focus on image biomarker identification and method optimization, models with high potential for universal clinical implementation are identified. Furthermore, we delve deeper into the critical concerns that can be addressed by different models and the obstacles that may arise in a clinical scenario. This review will encourage researchers to design models based on actual clinical needs, as well as assist urologists in gaining a better understanding of the promising results yielded by radiomics.
Male; Humans; Artificial Intelligence; Magnetic Resonance Imaging/methods*; Prostatic Neoplasms/diagnostic imaging*; Image Processing, Computer-Assisted/methods*; Precision Medicine; Retrospective Studies
5. Application of U-Net network in automatic image segmentation of adenoid and airway of nasopharynx.
Lu WANG ; Zebin LUO ; Jianhui NI ; Yan LI ; Liqing CHEN ; Shuwen GUAN ; Nannan ZHANG ; Xin WANG ; Rong CAI ; Yi GAO ; Qingfeng ZHANG
Journal of Clinical Otorhinolaryngology Head and Neck Surgery 2023;37(8):632-641
Objective: To explore the performance of a deep learning model based on the U-Net network for fully automatic image segmentation of the adenoid and nasopharyngeal airway. Methods: From March 2021 to March 2022, 240 children underwent cone beam computed tomography (CBCT) in the Department of Otolaryngology, Head and Neck Surgery, General Hospital of Shenzhen University. Fifty-two of these scans were selected for manual labeling of the nasopharyngeal airway and adenoid and were used to train and validate the deep learning model. The model was then applied to the remaining data, and the differences between the conventional two-dimensional indicator and the deep learning three-dimensional indicator were compared across all 240 datasets. Results: For the 52 cases in the modeling and training datasets, there was no significant difference between the predictions of the deep learning model and the doctors' manual labels (P>0.05). The model evaluation indices for nasopharyngeal airway volume were: mean intersection over union (MIoU) (86.32±0.54)%, Dice similarity coefficient (DSC) (92.91±0.23)%, accuracy (95.92±0.25)%, and precision (91.93±0.14)%; the corresponding indices for adenoid volume were: MIoU (86.28±0.61)%, DSC (92.88±0.17)%, accuracy (95.90±0.29)%, and precision (92.30±0.23)%. There was a positive correlation between the two-dimensional index A/N and the three-dimensional index AV/(AV+NAV) in the 240 children across different age groups (P<0.05), with a correlation coefficient of 0.74 for the 9-13 years age group. Conclusion: The deep learning model based on the U-Net network performs well in the automatic image segmentation of the adenoid and nasopharyngeal airway, has high application value, and shows a certain generalization ability.
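For reference, the overlap indices quoted in the results (MIoU, DSC, accuracy, precision) can be computed from a predicted and a manually labeled binary mask as in the NumPy sketch below; this illustrates the standard definitions, not the study's own code, and the random masks stand in for real segmentations.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Compute IoU, Dice, accuracy, and precision between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    return iou, dice, accuracy, precision

# Example with two random 3-D masks standing in for airway segmentations.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
gt = rng.random((64, 64, 64)) > 0.5
print(overlap_metrics(pred, gt))
```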
Child; Humans; Adolescent; Adenoids/diagnostic imaging*; Image Processing, Computer-Assisted/methods*; Pharynx; Cone-Beam Computed Tomography; Nose
6. Discussion on Technical Evaluation of Tongue Diagnosis Equipment of Traditional Chinese Medicine.
Yunping MI ; Shimei DUAN ; Qiang FU
Chinese Journal of Medical Instrumentation 2023;47(1):89-92
This study briefly introduces tongue diagnosis equipment used in traditional Chinese medicine. It analyzes and discusses the key points of technical evaluation of such equipment from the aspects of product name, performance parameters, image processing functions, methods of use, and clinical evaluation, and it analyzes the safety risks and effectiveness indicators of tongue diagnosis equipment, in the hope of contributing to its gradual standardization and assisting manufacturers with product registration.
Medicine, Chinese Traditional/methods*; Tongue; Image Processing, Computer-Assisted; Diagnostic Equipment; Reference Standards
8. CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement.
Yunpeng LIU ; Jin LI ; Yu WANG ; Wenli CAI ; Fei CHEN ; Wenjie LIU ; Xianhao MAO ; Kaifeng GAN ; Renfang WANG ; Dechao SUN ; Hong QIU ; Bangquan LIU
Journal of Biomedical Engineering 2023;40(2):208-216
Aiming at the problems of missing important features, inconspicuous details, and unclear textures in multimodal medical image fusion, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator operated on the high-frequency feature images, and double discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, and the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edge information in the subjective evaluation. In the objective evaluation, the indices Q^(AB/F), information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher, respectively, than the best comparison results. The fused images can be effectively applied to medical diagnosis and further improve diagnostic efficiency.
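Two of the objective fusion indices mentioned above, information entropy (IE) and spatial frequency (SF), are shown in the sketch below using their common definitions on an 8-bit fused image; the authors' exact implementations may differ, and the random image is only a placeholder.

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy of the gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Spatial frequency from row- and column-wise gray-level differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

fused = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder fused image
print(information_entropy(fused), spatial_frequency(fused))
```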
Image Processing, Computer-Assisted/methods*; Neural Networks, Computer; Tomography, X-Ray Computed; Magnetic Resonance Imaging/methods*; Algorithms
9. Segmentation of prostate region in magnetic resonance images based on improved V-Net.
Mingyuan GAO ; Shiju YAN ; Chengli SONG ; Zehua ZHU ; Erze XIE ; Boya FANG
Journal of Biomedical Engineering 2023;40(2):226-233
Magnetic resonance (MR) imaging is an important tool for prostate cancer diagnosis, and accurate segmentation of the prostate region in MR images by computer-aided diagnostic techniques is important for diagnosing prostate cancer. In this paper, we propose an improved end-to-end three-dimensional image segmentation network based on the traditional V-Net, using a deep learning approach to provide more accurate segmentation results. First, we fused a soft attention mechanism into the skip connections of the traditional V-Net and combined short skip connections with small convolutional kernels to further improve segmentation accuracy. The prostate region was then segmented using the Prostate MR Image Segmentation 2012 (PROMISE 12) challenge dataset, and the model was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The DSC and HD values of the segmentation model reached 0.903 and 3.912 mm, respectively. The experimental results show that the proposed algorithm provides more accurate three-dimensional segmentation results, segmenting prostate MR images accurately and efficiently and providing a reliable basis for clinical diagnosis and treatment.
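The paper's exact attention design is not specified in the abstract; the PyTorch sketch below shows one common way a soft attention gate can be placed on a 3-D skip connection, with assumed channel counts, purely as an illustration of the idea rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionGate3d(nn.Module):
    """Additive soft attention gate that reweights skip-connection features."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # Combine skip and gating features, then produce a per-voxel weight map.
        att = torch.relu(self.theta(skip) + self.phi(gate))
        att = torch.sigmoid(self.psi(att))
        return skip * att

# Example: gate the skip features with coarser decoder features that have
# already been upsampled to the same spatial size (shapes are assumptions).
skip = torch.rand(1, 32, 16, 64, 64)
gate = torch.rand(1, 64, 16, 64, 64)
print(AttentionGate3d(32, 64, 16)(skip, gate).shape)
```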
Male; Humans; Prostate/diagnostic imaging*; Image Processing, Computer-Assisted/methods*; Magnetic Resonance Imaging/methods*; Imaging, Three-Dimensional/methods*; Prostatic Neoplasms/diagnostic imaging*
10. Metal artifact reduction and clinical verification in oral and maxillofacial region based on deep learning.
Wei ZENG ; Shan Luo ZHOU ; Ji Xiang GUO ; Wei TANG
Chinese Journal of Stomatology 2023;58(6):540-546
Objective: To construct a neural network that eliminates metal artifacts in CT images by training a generative adversarial network (GAN) model, so as to provide a reference for clinical practice. Methods: CT data of patients treated in the Department of Radiology, West China Hospital of Stomatology, Sichuan University from January 2017 to June 2022 were collected. A total of 1 000 artifact-free CT datasets and 620 metal-artifact CT datasets were obtained, covering five types of metal restorative materials: fillings, crowns, titanium plates and screws, orthodontic brackets, and metal foreign bodies. Four hundred metal-artifact CT datasets and 1 000 artifact-free CT datasets were used for simulation synthesis, constructing 1 000 pairs of simulated artifact-and-metal images and corresponding simulated metal images (200 pairs of each type). With the data for the five types of metal artifacts kept equal, the entire dataset was randomly divided (by computer) into a training set (800 pairs) used to train the GAN model and a test set (200 pairs) used to evaluate its performance. The test set was evaluated quantitatively using root-mean-square error (RMSE) and the structural similarity index measure (SSIM). The trained GAN model was then employed to eliminate metal artifacts from the remaining 220 clinical metal-artifact CT datasets, and the results were evaluated by two senior attending doctors using a modified Likert scale. Results: The RMSE values for artifact elimination of fillings, crowns, titanium plates and screws, orthodontic brackets, and metal foreign bodies in the test set were 0.018±0.004, 0.023±0.007, 0.015±0.003, 0.019±0.004, and 0.024±0.008, respectively (F=1.29, P=0.274). The corresponding SSIM values were 0.963±0.023, 0.961±0.023, 0.965±0.013, 0.958±0.022, and 0.957±0.026 (F=2.22, P=0.069). The intraclass correlation coefficient between the two evaluators was 0.972. For the 220 clinical cases, the overall modified Likert scale score was 3.73±1.13, indicating satisfactory performance. The modified Likert scale scores for fillings, crowns, titanium plates and screws, orthodontic brackets, and metal foreign bodies were 3.68±1.13, 3.67±1.16, 3.97±1.03, 3.83±1.14, and 3.33±1.12, respectively (F=1.44, P=0.145). Conclusions: The metal artifact reduction GAN model constructed in this study can effectively remove the interference of metal artifacts and improve image quality.
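An illustrative evaluation loop for the test-set indices reported above (RMSE and SSIM) is sketched below using scikit-image; the image loading, normalization, and pairing steps are placeholders rather than the study's pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_pair(corrected, reference):
    """RMSE and SSIM between an artifact-corrected slice and its reference; both in [0, 1]."""
    rmse = np.sqrt(np.mean((corrected - reference) ** 2))
    s = ssim(corrected, reference, data_range=1.0)
    return rmse, s

# Dummy example standing in for one GAN-corrected slice and its artifact-free target.
rng = np.random.default_rng(1)
reference = rng.random((512, 512))
corrected = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)
print(evaluate_pair(corrected, reference))
```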
Humans; Tomography, X-Ray Computed/methods*; Deep Learning; Titanium; Neural Networks, Computer; Metals; Image Processing, Computer-Assisted/methods*; Algorithms
