Preliminary exploration of the applications of five large language models in the field of oral auxiliary diagnosis, treatment and health consultation
10.3760/cma.j.cn112144-20241107-00418
- VernacularTitle:口腔辅助诊疗和健康咨询领域5种大语言模型应用初探
- Author:
Cailing HAN 1; Shizhu BAI 1; Tingmin ZHANG 1; Chen LIU 1; Yuchen LIU 1; Xiangxiang HU 1; Yimin ZHAO 1
Author Information
1. Digital Center, Stomatological Hospital, Air Force Medical University; National Key Laboratory of Stomatognathic System Reconstruction and Regeneration; National Clinical Research Center for Oral Diseases; Shaanxi Key Laboratory of Stomatology, Xi'an 710032
- Publication Type:Journal Article
- Keywords:
Artificial intelligence;
Oral medicine;
Counseling;
Oral health;
Large language models;
Digital dentistry
- From:
Chinese Journal of Stomatology
2025;60(8):871-878
- Country: China
- Language:Chinese
- Abstract:
Objective: To evaluate the accuracy of oral healthcare information provided by different large language models (LLMs) and to explore their feasibility and limitations in oral auxiliary diagnosis, treatment and health consultation. Methods: This study designed eight items comprising 47 questions in total related to the diagnosis and treatment of oral diseases [to assess the performance of LLMs as an artificial intelligence (AI) medical assistant], and five items comprising 35 questions in total about oral health consultations (to assess the performance of LLMs as a simulated doctor). These questions were answered individually by five LLMs (Ernie Bot, HuatuoGPT, Tongyi Qianwen, iFlytek Spark, ChatGPT). Two attending physicians with more than 5 years of experience independently rated the responses using the 3C criteria (correct, clear, concise); inter-rater consistency was assessed with the Spearman rank correlation coefficient, and the Kruskal-Wallis test with Dunn post hoc test was used to assess statistical differences among the models. Additionally, this study used 600 questions from the 2023 dental licensing examination to evaluate each model's answering time, score, and accuracy. Results: As an AI medical assistant, LLMs can assist doctors in diagnosis and treatment decision-making, with an inter-rater Spearman coefficient of 0.505 (P<0.01). As a simulated doctor, LLMs can provide patient education, with an inter-rater Spearman coefficient of 0.533 (P<0.01). The 3C scores of each model as an AI medical assistant and as a simulated doctor were, respectively: Ernie Bot, 2.00 (1.00, 3.00) and 2.00 (2.00, 3.00) points; HuatuoGPT, 1.00 (1.00, 2.00) and 2.00 (1.00, 2.00) points; Tongyi Qianwen, 2.00 (1.00, 2.00) and 2.00 (1.00, 3.00) points; iFlytek Spark, 2.00 (1.00, 2.00) and 2.00 (1.75, 2.25) points; ChatGPT, 3.00 (2.00, 3.00) and 3.00 (2.00, 3.00) points (full score: 4 points).
The Kruskal-Wallis test showed statistically significant differences in the 3C scores among the five LLMs, both as an AI medical assistant and as a simulated doctor (all P<0.001). The average score of the five LLMs on the dental licensing examination was 370.2 points, corresponding to an accuracy rate of 61.7% (370.2/600), with an average time consumption of 94.6 min. Specifically, Ernie Bot took 115 min and scored 363 points with an accuracy rate of 60.5% (363/600); HuatuoGPT took 224 min and scored 305 points with an accuracy rate of 50.8% (305/600); Tongyi Qianwen took 43 min and scored 438 points with an accuracy rate of 73.0% (438/600); iFlytek Spark took 32 min and scored 364 points with an accuracy rate of 60.7% (364/600); and ChatGPT took 59 min and scored 381 points with an accuracy rate of 63.5% (381/600). Conclusions: Across the dual roles of AI medical assistant and simulated doctor, ChatGPT performed the best, giving largely correct, clear and concise answers, followed by Ernie Bot, Tongyi Qianwen and iFlytek Spark, with HuatuoGPT lagging significantly behind. In the dental licensing examination, all four LLMs except HuatuoGPT reached the passing level, and all five models answered in far less time than the 8 h allotted by the examination regulations. LLMs are feasible for application in oral auxiliary diagnosis, treatment and health consultation and can help both doctors and patients obtain medical information quickly. However, their outputs carry a risk of errors (the 3C scores did not reach full marks), so prudent judgment should be exercised when using them.