1. Unsupervised speckle noise reduction technique for clinical ultrasound imaging
Dongkyu JUNG ; Myeongkyun KANG ; Sang Hyun PARK ; Nizar GUEZZI ; Jaesok YU
Ultrasonography 2024;43(5):327-344
Purpose:
Deep learning-based image enhancement has significant potential in ultrasound image processing, as it can accurately model complicated nonlinear artifacts and noise, such as ultrasonic speckle patterns. However, acquiring clean, noise-free reference images with which to train deep learning networks presents significant challenges. This study introduces an unsupervised deep learning framework, termed speckle-to-speckle (S2S), designed for speckle and noise suppression. The framework can be trained without clean (speckle-free) reference images.
Methods:
The proposed network leverages statistical reasoning to train mutually on pairs of in vivo images, each containing a distinct speckle pattern and noise realization. It then infers speckle- and noise-free images without requiring clean reference images. This approach substantially reduces the time, cost, and effort that experts would otherwise spend manually annotating reference images.
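For intuition, the following is a minimal training sketch of the general speckle-to-speckle idea, i.e., mapping one speckled realization of a scene toward another, written against a standard PyTorch setup. The network architecture, loss function, and data handling shown here are illustrative assumptions only and do not reproduce the authors' published implementation.

    # Minimal sketch of speckle-to-speckle style training (illustrative
    # assumptions, not the authors' implementation).
    import torch
    import torch.nn as nn

    class DespeckleNet(nn.Module):
        """Small convolutional denoiser (hypothetical stand-in architecture)."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.body(x)

    def train_step(model, optimizer, img_a, img_b):
        """One update: map one speckled realization toward the other.

        img_a, img_b: two B-mode frames of the same scene with independent
        speckle/noise realizations (the key assumption of the approach).
        """
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(img_a), img_b)
        loss.backward()
        optimizer.step()
        return loss.item()

    model = DespeckleNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy tensors stand in for paired speckled frames (batch, channel, H, W).
    img_a, img_b = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)
    print(train_step(model, optimizer, img_a, img_b))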
Results:
The experimental results demonstrated that the proposed approach outperformed existing techniques in terms of signal-to-noise ratio, contrast-to-noise ratio, structural similarity index, edge preservation index, and processing time (up to 86 times faster). It also generalized well to images acquired with ultrasound scanners other than the one used in this work.
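As context for the reported metrics, the sketch below shows common ROI-based definitions of the signal-to-noise ratio and contrast-to-noise ratio. The exact formulas and regions of interest used in the study may differ, so this is only an illustrative computation on synthetic data.

    # Common ROI-based SNR and CNR definitions; the study's exact formulas may differ.
    import numpy as np

    def snr(roi):
        """Signal-to-noise ratio: mean over standard deviation within one ROI."""
        return roi.mean() / roi.std()

    def cnr(roi_target, roi_background):
        """Contrast-to-noise ratio between a target ROI and a background ROI."""
        return abs(roi_target.mean() - roi_background.mean()) / np.sqrt(
            roi_target.var() + roi_background.var())

    rng = np.random.default_rng(0)
    target = rng.normal(0.8, 0.05, (32, 32))      # hypothetical bright region
    background = rng.normal(0.3, 0.05, (32, 32))  # hypothetical dark region
    print(snr(target), cnr(target, background))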
Conclusion:
S2S demonstrates the potential of employing an unsupervised learning-based technique in medical imaging applications, where acquiring a ground truth reference is challenging.
2. Quantitative Assessment of Chest CT Patterns in COVID-19 and Bacterial Pneumonia Patients: a Deep Learning Perspective
Myeongkyun KANG ; Kyung Soo HONG ; Philip CHIKONTWE ; Miguel LUNA ; Jong Geol JANG ; Jongsoo PARK ; Kyeong-Cheol SHIN ; Sang Hyun PARK ; June Hong AHN
Journal of Korean Medical Science 2021;36(5):e46-
Background:
Distinguishing the subtle differences between computed tomography (CT) images of coronavirus disease 2019 (COVID-19) and bacterial pneumonia patients is difficult and often leads to inaccurate diagnoses. Interpretable feature extraction techniques are therefore desirable for describing a patient's condition.
Methods:
This retrospective cohort study included 170 patients with confirmed COVID-19 or bacterial pneumonia whose CT scans were acquired at Yeungnam University Hospital in Daegu, Korea. The lung and lesion regions were segmented, and the lesions were cropped into 2D patches to train a classifier model that could differentiate between COVID-19 and bacterial pneumonia. The K-means algorithm was used to cluster the deep features extracted by the trained model into 20 groups. Each lesion patch cluster was described by a characteristic imaging term for comparison. For each CT image containing multiple lesions, a histogram of lesion types was constructed using the cluster information. Finally, a support vector machine (SVM) classifier was trained on the histogram and radiomics features to classify disease type and severity.
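As an illustration of the cluster-histogram step, the following scikit-learn sketch clusters per-patch deep features with K-means, builds one 20-bin histogram per patient, and trains an SVM on the histograms. The feature dimensionality, patch counts, and labels are placeholder assumptions; the study's actual feature extractor and radiomics features are not reproduced here.

    # Sketch of the cluster-histogram step, assuming per-patch deep features
    # have already been extracted by the trained classifier. All data here are
    # placeholders, not the study's actual pipeline.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    patch_features = rng.normal(size=(5000, 128))    # hypothetical deep features per lesion patch
    patch_patient = rng.integers(0, 170, size=5000)  # which patient each patch came from
    labels = rng.integers(0, 2, size=170)            # 0 = bacterial pneumonia, 1 = COVID-19 (dummy)

    # Group lesion patches into 20 clusters, as described in the abstract.
    kmeans = KMeans(n_clusters=20, random_state=0).fit(patch_features)

    # Build one 20-bin histogram of lesion-cluster occurrences per patient.
    hist = np.zeros((170, 20))
    for cluster_id, patient_id in zip(kmeans.labels_, patch_patient):
        hist[patient_id, cluster_id] += 1
    hist /= hist.sum(axis=1, keepdims=True) + 1e-8  # normalize per patient

    # Train an SVM on the histogram features (radiomics features would be concatenated here).
    clf = SVC(kernel="linear").fit(hist, labels)
    print(clf.score(hist, labels))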
Results:
The 20 clusters constructed from the 170 patients were reviewed based on common radiographic appearance types. Two clusters showed typical findings of COVID-19, while two other clusters showed typical findings related to bacterial pneumonia. Notably, one cluster showed bilateral diffuse ground-glass opacities (GGOs) in the central and peripheral lungs and was considered a key factor for severity classification. The proposed method achieved an accuracy of 91.2% for classifying COVID-19 and bacterial pneumonia patients, and 95% for severity classification. The quantitative CT parameters represented by the values of cluster 8 correlated with existing laboratory data and clinical parameters.
Conclusion:
Deep chest CT analysis with the constructed lesion clusters revealed well-known COVID-19 CT manifestations comparable to manual CT analysis. The constructed histogram features improved accuracy for both disease and severity classification and showed correlations with laboratory data and clinical parameters. These histogram features can provide guidance for improved analysis and treatment of COVID-19.