1. Adherence of Studies on Large Language Models for Medical Applications Published in Leading Medical Journals According to the MI-CLEAR-LLM Checklist
Ji Su KO ; Hwon HEO ; Chong Hyun SUH ; Jeho YI ; Woo Hyun SHIM
Korean Journal of Radiology 2025;26(4):304-312
Objective:
To evaluate the adherence of large language model (LLM)-based healthcare research to the Minimum Reporting Items for Clear Evaluation of Accuracy Reports of Large Language Models in Healthcare (MI-CLEAR-LLM) checklist, a framework designed to enhance the transparency and reproducibility of studies on the accuracy of LLMs for medical applications.
Materials and Methods:
A systematic PubMed search was conducted to identify articles on LLM performance published in high-ranking clinical medicine journals (the top 10% in each of the 59 specialties according to the 2023 Journal Impact Factor) from November 30, 2022, through June 25, 2024. Data on the six MI-CLEAR-LLM checklist items, namely 1) identification and specification of the LLM used, 2) stochasticity handling, 3) prompt wording and syntax, 4) prompt structuring, 5) prompt testing and optimization, and 6) independence of the test data, were independently extracted by two reviewers, and adherence was calculated for each item.
Results:
Of 159 studies, 100% (159/159) reported the name of the LLM, 96.9% (154/159) reported the version, and 91.8% (146/159) reported the manufacturer. However, only 54.1% (86/159) reported the training data cutoff date, 6.3% (10/159) documented access to web-based information, and 50.9% (81/159) provided the date of the query attempts. Clear documentation regarding stochasticity management was provided in 15.1% (24/159) of the studies. Regarding prompt details, 49.1% (78/159) provided exact prompt wording and syntax, but only 34.0% (54/159) documented prompt-structuring practices. While 46.5% (74/159) of the studies detailed prompt testing, only 15.7% (25/159) explained the rationale for specific word choices. Test data independence was reported for only 13.2% (21/159) of the studies, and, among the 76 studies that used internet-sourced test data, 56.6% (43/76) provided URLs.
Conclusion:
Although basic LLM identification details were relatively well reported, other key aspects, including stochasticity, prompts, and test data, were frequently underreported. Enhancing adherence to the MI-CLEAR-LLM checklist will allow LLM research to achieve greater transparency and will foster more credible and reliable future studies.
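The per-item adherence rates in the abstract above are simple proportions of reporting studies over the 159 included studies, rounded to one decimal place. A minimal sketch (counts taken from the Results section; the item labels are shorthand, not the checklist's exact wording) reproduces the reported percentages:

```python
# Adherence = reporting studies / total included studies (n = 159),
# as a percentage rounded to one decimal place.
TOTAL = 159

counts = {
    "LLM name": 159,
    "LLM version": 154,
    "Manufacturer": 146,
    "Training data cutoff date": 86,
    "Stochasticity handling": 24,
    "Exact prompt wording and syntax": 78,
    "Test data independence": 21,
}

def adherence(reporting: int, total: int = TOTAL) -> float:
    """Percentage of studies reporting a checklist item."""
    return round(100 * reporting / total, 1)

for item, k in counts.items():
    print(f"{item}: {adherence(k)}% ({k}/{TOTAL})")
```

Running this yields, for example, 96.9% for the LLM version (154/159) and 54.1% for the training data cutoff date (86/159), matching the figures reported above.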
6. Dietary interventions to reduce heavy metal exposure in antepartum and postpartum women: a systematic review
Su Ji HEO ; Nalae MOON ; Ju Hee KIM
Women’s Health Nursing 2024;30(4):265-276
Background:
Heavy metals are persistent in the environment and toxic; they can accumulate in the body and cause organ damage, which may negatively affect perinatal women and their fetuses. This systematic review therefore evaluated the effectiveness of dietary interventions to reduce heavy metal exposure in antepartum and postpartum women.
Methods:
We searched five databases (PubMed, Embase, Scopus, Web of Science, and Cochrane Library) for randomized controlled trials that provided dietary interventions for antepartum and postpartum women. Two reviewers independently assessed study quality using the Cochrane Risk-of-Bias tool for randomized controlled trials.
Results:
A total of seven studies, conducted in six countries, were included, with interventions categorized into “nutritional supplement,” “food supply,” and “educational” strategies. Nutritional supplement interventions, such as calcium and probiotics, primarily reduced heavy metal levels in the blood and minimized toxicity. Food-based interventions, including consumption of specific fruits, decreased heavy metal concentrations in breast milk. Educational interventions effectively promoted behavioral changes, such as adopting low-mercury diets. The studies demonstrated a low overall risk of bias, supporting the reliability of the findings.
Conclusion:
The main findings underscore the effectiveness of dietary interventions in reducing heavy metal exposure, emphasizing the critical role of nursing in guiding dietary strategies to minimize exposure risks and support maternal and fetal health during pregnancy.
7. Risk Factors of Postpartum Depression Among Korean Women: An Analysis Based on the Korean Pregnancy Outcome Study (KPOS)
So Hyun SHIM ; Su Young LEE ; Inkyung JUNG ; Seok-Jae HEO ; You Jung HAN ; Dong Wook KWAK ; Min Hyoung KIM ; Hee Jin PARK ; Jin Hoon CHUNG ; Ji Hyae LIM ; Moon Young KIM ; Dong Hyun CHA ; Sung Shin SHIM ; Hee Young CHO ; Hyun Mee RYU
Journal of Korean Medical Science 2024;39(3):e31-
Background:
Postpartum depression (PPD) can negatively affect infant well-being and child development. Although the frequency and risk factors of PPD symptoms might vary depending on the country and culture, there is limited research on these risk factors among Korean women. This study aimed to elucidate the potential risk factors of PPD throughout pregnancy to help improve PPD screening and prevention in Korean women.
Methods:
Pregnant women at 12 gestational weeks (GW) were enrolled from two specialized obstetric hospitals between March 2013 and November 2017. A questionnaire survey was administered at 12 GW, 24 GW, 36 GW, and 4 weeks postpartum. Depressive symptoms were assessed using the Edinburgh Postnatal Depression Scale (EPDS), and PPD was defined as an EPDS score of ≥ 10.
Results:
PPD was prevalent in 16.3% (410/2,512) of the participants. Depressive feelings at 12 GW and the postpartum factors of stress, relationship with children, depressive feelings, fear, sadness, and admission of the baby to the neonatal intensive care unit were significantly associated with a higher risk of PPD. In contrast, high postpartum quality of life and marital satisfaction during the postpartum period were significantly associated with a lower risk of PPD. A model for predicting PPD developed from these factors had an area under the curve of 0.871.
Conclusion
Depressive feelings at 12 GW and postpartum stress, fear, sadness, relationship with children, low quality of life, and low marital satisfaction increased the risk of PPD. A risk model comprising these significant factors can effectively predict PPD and may support its prevention and appropriate treatment.
8. Difference in Baseline Antimicrobial Prescription Patterns of Hospitals According to Participation in the National Antimicrobial Monitoring and Feedback System in Korea
Jihye SHIN ; Ji Young PARK ; Jungmi CHAE ; Hyung-Sook KIM ; Song Mi MOON ; Eunjeong HEO ; Se Yoon PARK ; Dong Min SEO ; Ha-Jin CHUN ; Yong Chan KIM ; Myung Jin LEE ; Kyungmin HUH ; Hyo Jung PARK ; I Ji YUN ; Su Jin JEONG ; Jun Yong CHOI ; Dong-Sook KIM ; Bongyoung KIM
Journal of Korean Medical Science 2024;39(29):e216-
This study aimed to evaluate the differences in baseline characteristics and patterns of antibiotic usage among hospitals according to their participation in the Korea National Antimicrobial Use Analysis System (KONAS). We obtained claims data from the National Health Insurance for inpatients admitted to all secondary- and tertiary-care hospitals in Korea between January 2020 and December 2021. Overall, 15.9% (58/395) of hospitals were KONAS participants, among which the proportions of hospitals with >900 beds (31.0% vs. 2.6%, P < 0.001) and of tertiary-care hospitals (50.0% vs. 5.2%, P < 0.001) were higher than among non-participants. The consumption of antibiotics targeting antimicrobial-resistant gram-positive bacteria (33.7 vs. 27.1 days of therapy [DOT]/1,000 patient-days, P = 0.019) and of antibiotics predominantly used for resistant gram-negative bacteria (4.8 vs. 3.7 DOT/1,000 patient-days, P = 0.034) was higher in KONAS-participating than in non-participating hospitals. The current KONAS data do not fully represent all secondary- and tertiary-care hospitals in Korea; thus, KONAS results should be interpreted with caution.
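The consumption metric used in the abstract above, days of therapy (DOT) per 1,000 patient-days, normalizes the total number of days on which an antibiotic was administered by the aggregate inpatient exposure. A minimal sketch of that normalization (the input numbers are illustrative, not values from the study):

```python
# DOT/1,000 patient-days: total days of antibiotic therapy across all
# inpatients, scaled per 1,000 patient-days of hospital exposure.
def dot_per_1000_patient_days(total_dot: float, patient_days: float) -> float:
    """Normalize days of therapy (DOT) to a rate per 1,000 patient-days."""
    return 1000 * total_dot / patient_days

# Illustrative example: 674 days of therapy over 20,000 patient-days
rate = dot_per_1000_patient_days(674, 20_000)
print(round(rate, 1))
```

This scaling is what makes antibiotic consumption comparable across hospitals of different sizes, as in the participant versus non-participant comparison above.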
