1. Adherence of Studies on Large Language Models for Medical Applications Published in Leading Medical Journals According to the MI-CLEAR-LLM Checklist
Ji Su KO ; Hwon HEO ; Chong Hyun SUH ; Jeho YI ; Woo Hyun SHIM
Korean Journal of Radiology 2025;26(4):304-312
Objective:
To evaluate the adherence of large language model (LLM)-based healthcare research to the Minimum Reporting Items for Clear Evaluation of Accuracy Reports of Large Language Models in Healthcare (MI-CLEAR-LLM) checklist, a framework designed to enhance the transparency and reproducibility of studies on the accuracy of LLMs for medical applications.
Materials and Methods:
A systematic PubMed search was conducted to identify articles on LLM performance published in high-ranking clinical medicine journals (the top 10% in each of the 59 specialties according to the 2023 Journal Impact Factor) from November 30, 2022, through June 25, 2024. Two reviewers independently extracted data on the six MI-CLEAR-LLM checklist items: 1) identification and specification of the LLM used, 2) stochasticity handling, 3) prompt wording and syntax, 4) prompt structuring, 5) prompt testing and optimization, and 6) independence of the test data. Adherence was calculated for each item.
Results:
Of 159 studies, 100% (159/159) reported the name of the LLM, 96.9% (154/159) reported the version, and 91.8% (146/159) reported the manufacturer. However, only 54.1% (86/159) reported the training data cutoff date, 6.3% (10/159) documented access to web-based information, and 50.9% (81/159) provided the date of the query attempts. Clear documentation of stochasticity management was provided in 15.1% (24/159) of the studies. Regarding prompt details, 49.1% (78/159) provided exact prompt wording and syntax, but only 34.0% (54/159) documented prompt-structuring practices. While 46.5% (74/159) of the studies detailed prompt testing, only 15.7% (25/159) explained the rationale for specific word choices. Test data independence was reported for only 13.2% (21/159) of the studies, and among the 76 studies that used internet-sourced test data, 56.6% (43/76) provided URLs.
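The adherence figures above are simple proportions, with 159 studies as the denominator for every item except the URL item, where the 43/76 figure implies that 76 studies used internet-sourced test data. As a quick arithmetic check of the reported percentages (counts taken directly from the abstract):

```python
# Adherence counts reported in the abstract: item -> (numerator, denominator).
counts = {
    "LLM name reported": (159, 159),
    "LLM version reported": (154, 159),
    "Manufacturer reported": (146, 159),
    "Training data cutoff date": (86, 159),
    "Web access documented": (10, 159),
    "Date of query attempts": (81, 159),
    "Stochasticity management": (24, 159),
    "Exact prompt wording/syntax": (78, 159),
    "Prompt-structuring practices": (54, 159),
    "Prompt testing detailed": (74, 159),
    "Word-choice rationale": (25, 159),
    "Test data independence": (21, 159),
    "URLs for internet-sourced data": (43, 76),
}

# Recompute each percentage to one decimal place, as in the abstract.
for item, (n, total) in counts.items():
    print(f"{item}: {100 * n / total:.1f}% ({n}/{total})")
```

Running this reproduces every percentage quoted in the Results, e.g. 86/159 gives 54.1% and 43/76 gives 56.6%.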
Conclusion:
Although basic LLM identification details were relatively well reported, other key aspects, including stochasticity, prompts, and test data, were frequently underreported. Enhancing adherence to the MI-CLEAR-LLM checklist will allow LLM research to achieve greater transparency and will foster more credible and reliable future studies.
6. Dietary interventions to reduce heavy metal exposure in antepartum and postpartum women: a systematic review
Su Ji HEO ; Nalae MOON ; Ju Hee KIM
Women’s Health Nursing 2024;30(4):265-276
Heavy metals are persistent in the environment and toxic; they can accumulate in the body and cause organ damage, which may in turn harm perinatal women and their fetuses. This systematic review was therefore conducted to evaluate the effectiveness of dietary interventions for reducing heavy metal exposure in antepartum and postpartum women.
Methods:
We searched five databases (PubMed, Embase, Scopus, Web of Science, and Cochrane Library) for randomized controlled trials of dietary interventions for antepartum and postpartum women. Two reviewers independently assessed study quality using the Cochrane Risk-of-Bias tool for randomized controlled trials.
Results:
A total of seven studies, conducted in six countries, were included. Interventions were categorized into “nutritional supplement,” “food supply,” and “educational” strategies. Nutritional supplement interventions, such as calcium and probiotics, primarily reduced heavy metal levels in the blood and minimized toxicity. Food-based interventions, including consumption of specific fruits, decreased heavy metal concentrations in breast milk. Educational interventions effectively promoted behavioral changes, such as adopting diets low in mercury. The included studies showed a low overall risk of bias, supporting the reliability of the findings.
Conclusion:
The findings underscore the effectiveness of dietary approaches in mitigating heavy metal exposure and improving maternal and child health. They also highlight the critical role of nursing in guiding dietary strategies to minimize exposure risks, ultimately supporting maternal and fetal health during pregnancy.
9. Prevalence and Associated Factors of Depression and Anxiety Among Healthcare Workers During the Coronavirus Disease 2019 Pandemic: A Nationwide Study in Korea
Shinwon LEE ; Soyoon HWANG ; Ki Tae KWON ; EunKyung NAM ; Un Sun CHUNG ; Shin-Woo KIM ; Hyun-Ha CHANG ; Yoonjung KIM ; Sohyun BAE ; Ji-Yeon SHIN ; Sang-geun BAE ; Hyun Wook RYOO ; Juhwan JEONG ; NamHee OH ; So Hee LEE ; Yeonjae KIM ; Chang Kyung KANG ; Hye Yoon PARK ; Jiho PARK ; Se Yoon PARK ; Bongyoung KIM ; Hae Suk CHEONG ; Ji Woong SON ; Su Jin LIM ; Seongcheol YUN ; Won Sup OH ; Kyung-Hwa PARK ; Ju-Yeon LEE ; Sang Taek HEO ; Ji-yeon LEE
Journal of Korean Medical Science 2024;39(13):e120
Background:
A healthcare system’s collapse due to a pandemic, such as the coronavirus disease 2019 (COVID-19), can expose healthcare workers (HCWs) to various mental health problems. This study aimed to investigate the impact of the COVID-19 pandemic on the depression and anxiety of HCWs.
Methods:
A nationwide questionnaire-based survey was conducted on HCWs who worked in healthcare facilities and public health centers in Korea in December 2020. Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder-7 (GAD-7) were used to measure depression and anxiety. To investigate factors associated with depression and anxiety, stepwise multiple logistic regression analysis was performed.
Results:
A total of 1,425 participating HCWs were included. The mean depression score (PHQ-9) of HCWs increased from 2.37 before COVID-19 to 5.39 after COVID-19, and the mean anxiety score (GAD-7) increased from 1.41 to 3.41. The proportion of HCWs with moderate to severe depression (PHQ-9 ≥ 10) increased from 3.8% before COVID-19 to 19.5% after COVID-19, whereas that of HCWs with moderate to severe anxiety (GAD-7 ≥ 10) increased from 2.0% to 10.1%. Insomnia, chronic fatigue symptoms, physical symptoms after COVID-19, the anxiety score (GAD-7) after COVID-19, living alone, and exhaustion were positively correlated with depression. Furthermore, post-traumatic stress symptoms, the stress score (Global Assessment of Recent Stress), the depression score (PHQ-9) after COVID-19, and exhaustion were positively correlated with anxiety.
Conclusion:
During the COVID-19 pandemic in Korea, HCWs commonly suffered from mental health problems, including depression and anxiety. Regular monitoring of the physical and mental health of HCWs during the pandemic is crucial, and social support and strategies are needed to reduce their heavy workload and psychological distress.
10. Risk Factors of Postpartum Depression Among Korean Women: An Analysis Based on the Korean Pregnancy Outcome Study (KPOS)
So Hyun SHIM ; Su Young LEE ; Inkyung JUNG ; Seok-Jae HEO ; You Jung HAN ; Dong Wook KWAK ; Min Hyoung KIM ; Hee Jin PARK ; Jin Hoon CHUNG ; Ji Hyae LIM ; Moon Young KIM ; Dong Hyun CHA ; Sung Shin SHIM ; Hee Young CHO ; Hyun Mee RYU
Journal of Korean Medical Science 2024;39(3):e31
Background:
Postpartum depression (PPD) can negatively affect infant well-being and child development. Although the frequency and risk factors of PPD symptoms might vary depending on the country and culture, there is limited research on these risk factors among Korean women. This study aimed to elucidate the potential risk factors of PPD throughout pregnancy to help improve PPD screening and prevention in Korean women.
Methods:
The pregnant women at 12 gestational weeks (GW) were enrolled from two obstetric specialized hospitals from March 2013 to November 2017. A questionnaire survey was administered at 12 GW, 24 GW, 36 GW, and 4 weeks postpartum. Depressive symptoms were assessed using the Edinburgh Postnatal Depression Scale, and PPD was defined as a score of ≥ 10.
Results:
PPD was prevalent in 16.3% (410/2,512) of the participants. Depressive feelings at 12 GW and the postpartum factors of stress, relationship with children, depressive feelings, fear, sadness, and neonatal intensive care unit admission of the baby were significantly associated with a higher risk of PPD. Meanwhile, high quality of life and high marital satisfaction during the postpartum period were significantly associated with a lower risk of PPD. A model for predicting PPD developed using these factors had an area under the curve of 0.871.
Conclusion:
Depressive feelings at 12 GW and postpartum stress, fear, sadness, relationship with children, low quality of life, and low marital satisfaction increased the risk of PPD. A risk model comprising these significant factors can effectively predict PPD and can be helpful for its prevention and appropriate treatment.