1. The Future of Artificial Intelligence for Physicians.
Journal of the Korean Medical Association 2016;59(6):410-412
Artificial intelligence (AI) to support the medical decision-making process has long been a source of both interest and concern for physicians and the public. Recently, the introduction of open-source software, supercomputers, and a variety of industry innovations has accelerated the development of AI for clinical decision support systems. This article summarizes current trends and challenges in the medical field and describes how AI can improve healthcare systems by increasing efficiency and decreasing costs. At the same time, it emphasizes the central role of physicians in using AI as a tool that supplements, rather than replaces, their decisions as they provide patient-oriented care.
Artificial Intelligence*; Clinical Decision-Making; Decision Support Systems, Clinical; Delivery of Health Care
3. The Latest Trends in the Use of Deep Learning in Radiology Illustrated Through the Stages of Deep Learning Algorithm Development
Kyoung Doo SONG ; Myeongchan KIM ; Synho DO
Journal of the Korean Radiological Society 2019;80(2):202-212
Recently, considerable progress has been made in interpreting perceptual information through artificial intelligence, allowing machines to interpret highly complex data more effectively. Furthermore, applications of artificial intelligence, represented by deep learning technology, to medical and biomedical research are increasing rapidly. In this article, we explain the stages of deep learning algorithm development in the field of medical imaging, namely topic selection, data collection, data exploration and refinement, algorithm development, algorithm evaluation, and clinical application; we also discuss the latest trends for each stage.
4. Explainable & Safe Artificial Intelligence in Radiology
Journal of the Korean Society of Radiology 2024;85(5):834-847
Artificial intelligence (AI) is transforming radiology with improved diagnostic accuracy and efficiency, but prediction uncertainty remains a critical challenge. This review examines key sources of uncertainty (out-of-distribution, aleatoric, and model uncertainty) and highlights the importance of independent confidence metrics and explainable AI for safe integration. Independent confidence metrics assess the reliability of AI predictions, while explainable AI provides transparency, enhancing collaboration between AI and radiologists. The development of zero-error tolerance models, designed to minimize errors, sets new standards for safety. Addressing these challenges will enable AI to become a trusted partner in radiology, advancing care standards and patient outcomes.
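The distinction the abstract draws between aleatoric (data-inherent) and model (epistemic) uncertainty can be made concrete with a standard decomposition of predictive entropy. The sketch below is illustrative only and is not taken from the cited review: it assumes class probabilities collected from several stochastic forward passes of a classifier (e.g., Monte Carlo dropout), and the function name `decompose_uncertainty` is hypothetical.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (n_passes, n_classes) holding class probabilities
    from repeated stochastic forward passes (e.g., Monte Carlo dropout).
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12  # guard against log(0)
    mean_p = probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric part: mean entropy of the individual predictions.
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Epistemic (model) part: the gap, i.e., the mutual information
    # between the prediction and the model parameters.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Passes that agree confidently: low epistemic uncertainty.
agree = np.array([[0.95, 0.05], [0.94, 0.06], [0.96, 0.04]])
# Passes that disagree: high epistemic uncertainty, a signal the review's
# "independent confidence metrics" are meant to capture.
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.50, 0.50]])
```

A model whose stochastic passes disagree (as in `disagree`) yields a large epistemic term even when each individual prediction looks confident, which is one reason a single softmax score is an unreliable confidence metric.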