ChatGPT goes to the operating room: evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models
10.4174/astr.2023.104.5.269
- Authors: Namkee OH1; Gyu-Seong CHOI; Woo Yong LEE
- Author Information: 1. Department of Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Publication Type: Original Article
- From: Annals of Surgical Treatment and Research 2023;104(5):269-273
- Country: Republic of Korea
- Language: English
Abstract:
Purpose: This study aimed to assess the performance of ChatGPT, specifically the GPT-3.5 and GPT-4 models, in understanding complex surgical clinical information and its potential implications for surgical education and training.
Methods: The dataset comprised 280 questions from the Korean general surgery board exams conducted between 2020 and 2022. Both GPT-3.5 and GPT-4 models were evaluated, and their performances were compared using the McNemar test.
Results: GPT-3.5 achieved an overall accuracy of 46.8%, while GPT-4 demonstrated a significant improvement with an overall accuracy of 76.4%, indicating a notable difference in performance between the models (P < 0.001). GPT-4 also exhibited consistent performance across all subspecialties, with accuracy rates ranging from 63.6% to 83.3%.
Conclusion: ChatGPT, particularly GPT-4, demonstrates a remarkable ability to understand complex surgical clinical information, achieving an accuracy rate of 76.4% on the Korean general surgery board exam. However, it is important to recognize the limitations of large language models and to ensure that they are used in conjunction with human expertise and judgment.