1.Erratum: Correction of Text in the Article “The Long-term Outcomes and Risk Factors of Complications After Fontan Surgery: From the Korean Fontan Registry (KFR)”
Sang-Yun LEE ; Soo-Jin KIM ; Chang-Ha LEE ; Chun Soo PARK ; Eun Seok CHOI ; Hoon KO ; Hyo Soon AN ; I Seok KANG ; Ja Kyoung YOON ; Jae Suk BAEK ; Jae Young LEE ; Jinyoung SONG ; Joowon LEE ; June HUH ; Kyung-Jin AHN ; Se Yong JUNG ; Seul Gi CHA ; Yeo Hyang KIM ; Youngseok LEE ; Sanghoon CHO
Korean Circulation Journal 2025;55(3):256-257
3.Deep Learning Technology for Classification of Thyroid Nodules Using Multi-View Ultrasound Images: Potential Benefits and Challenges in Clinical Application
Jinyoung KIM ; Min-Hee KIM ; Dong-Jun LIM ; Hankyeol LEE ; Jae Jun LEE ; Hyuk-Sang KWON ; Mee Kyoung KIM ; Ki-Ho SONG ; Tae-Jung KIM ; So Lyung JUNG ; Yong Oh LEE ; Ki-Hyun BAEK
Endocrinology and Metabolism 2025;40(2):216-224
Background:
This study aimed to evaluate the applicability of deep learning technology to thyroid ultrasound images for classification of thyroid nodules.
Methods:
This retrospective analysis included ultrasound images of patients with thyroid nodules investigated by fine-needle aspiration at the thyroid clinic of a single center from April 2010 to September 2012. Thyroid nodules with cytopathologic results of Bethesda category V (suspicious for malignancy) or VI (malignant) were defined as thyroid cancer. Multiple deep learning algorithms based on convolutional neural networks (CNNs), namely ResNet, DenseNet, and EfficientNet, were utilized, and Siamese neural networks facilitated multi-view analysis of paired transverse and longitudinal ultrasound images.
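The multi-view design described above, in which a single shared ("Siamese") encoder processes both the transverse and the longitudinal view before the two embeddings are fused into one malignancy prediction, can be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the feature dimensions, the ReLU encoder, and fusion by concatenation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared ("Siamese") encoder weights: both views pass through the SAME
# weight matrix, which is what weight-ties the two branches.
W_enc = rng.normal(size=(256, 32))  # 256-dim image features -> 32-dim embedding

def encode(x):
    """Shared encoder applied identically to either ultrasound view."""
    return np.maximum(x @ W_enc, 0.0)  # ReLU non-linearity

# Fusion head: concatenated embeddings (32 + 32) -> single malignancy logit.
W_head = rng.normal(size=(64, 1))

def predict_malignancy(transverse_feats, longitudinal_feats):
    z_t = encode(transverse_feats)       # embedding of the transverse view
    z_l = encode(longitudinal_feats)     # embedding of the longitudinal view
    fused = np.concatenate([z_t, z_l])   # multi-view fusion by concatenation
    logit = float(fused @ W_head)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability in (0, 1)

p = predict_malignancy(rng.normal(size=256), rng.normal(size=256))
```

In practice each `encode` branch would be a pretrained CNN backbone (such as the ResNet50 or DenseNet201 models named in the abstract) rather than a single linear layer, but the weight-tying and fusion pattern is the same.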
Results:
Among the 1,048 thyroid nodules analyzed from 943 patients, 306 (29%) were identified as thyroid cancer. In a subgroup analysis of transverse and longitudinal images, longitudinal images showed superior predictive ability. Multi-view modeling based on paired transverse and longitudinal images significantly improved model performance, with an accuracy of 0.82 (95% confidence interval [CI], 0.80 to 0.86) with ResNet50, 0.83 (95% CI, 0.83 to 0.88) with DenseNet201, and 0.81 (95% CI, 0.79 to 0.84) with EfficientNetV2-S. Training with high-resolution images obtained using the latest equipment tended to improve model performance in association with increased sensitivity.
Conclusion:
CNN algorithms applied to ultrasound images demonstrated substantial accuracy in thyroid nodule classification, indicating their potential as valuable tools for diagnosing thyroid cancer. However, in real-world clinical settings, it is important to be aware that model performance may vary depending on the quality of images acquired by different physicians and imaging devices.
