1. In-Vivo Proton Magnetic Resonance Spectroscopy of 2-Hydroxyglutarate in Isocitrate Dehydrogenase-Mutated Gliomas: A Technical Review for Neuroradiologists.
Hyeonjin KIM ; Sungjin KIM ; Hyeong Hun LEE ; Hwon HEO
Korean Journal of Radiology 2016;17(5):620-632
The diagnostic and prognostic potential of the onco-metabolite 2-hydroxyglutarate (2HG) as a proton magnetic resonance spectroscopy (1H-MRS)-detectable biomarker of isocitrate dehydrogenase (IDH)-mutated (IDH-MT) gliomas has recently drawn the attention of neuroradiologists. However, because of severe spectral overlap with background signals, quantification of 2HG can be very challenging. In this technical review for neuroradiologists, we first summarize the biochemistry of 2HG and its significance in the diagnosis of IDH-MT gliomas. Second, we outline the various 1H-MRS methods used in previous studies. Finally, we review previous in vivo studies and discuss the current status of 1H-MRS in the diagnosis of IDH-MT gliomas.
Biochemistry; Diagnosis; Glioma*; Isocitrate Dehydrogenase; Magnetic Resonance Spectroscopy; Proton Magnetic Resonance Spectroscopy*; Protons*
2. Effect of Berberine on Cell Survival in the Developing Rat Brain Damaged by MK-801.
Taehwan LEE ; Hwon HEO ; Yunhee KIM
Experimental Neurobiology 2010;19(3):140-145
Berberine is an isoquinoline alkaloid isolated from goldenthread (Coptidis Rhizoma) that has been shown to have many biological and pharmacological effects. We previously reported that berberine promotes the survival and differentiation of neural stem cells. To examine whether berberine promotes the survival of damaged neuronal cells, we generated a cellular model under oxidative stress and a neonatal animal model of degenerative brain disease by injecting MK-801. MK-801, a noncompetitive antagonist of N-methyl-D-aspartate (NMDA) receptors, acts as a neurotoxin in developing rats by inhibiting NMDA receptors and inducing neuronal cell death. We found that the survival rate of SH-SY5Y cells under oxidative stress increased by 287% and 344% when treated with 1.5 and 3.0 µg/mL berberine, respectively. In developing rats injected with MK-801, we observed TUNEL-positive apoptotic cells spread throughout the entire brain. Cell death was decreased more than 3-fold in the brains of the MK-801-induced neurodegenerative animal model when the animals were treated with berberine. This suggests that berberine promotes activity-dependent cell survival mediated by the NMDA receptor, because berberine is known to activate neurons by blocking K+ currents or lowering the action potential threshold. Taken together, berberine has a neuroprotective effect on damaged neurons and on the neurodegenerating brains of a neonatal animal model induced by MK-801 administration.
Action Potentials; Animals; Animals, Newborn; Berberine; Brain; Brain Diseases; Cell Death; Cell Survival; Dizocilpine Maleate; In Situ Nick-End Labeling; Isoquinolines; Models, Animal; N-Methylaspartate; Neural Stem Cells; Neurons; Neuroprotective Agents; Oxidative Stress; Rats; Receptors, N-Methyl-D-Aspartate; Survival Rate
3. Current Updates and Unmet Needs of Brain MRI-Based Artificial Intelligence Software for Patients With Neurodegenerative Diseases in the Republic of Korea
So Yeong JEONG ; Chong Hyun SUH ; Hwon HEO ; Woo Hyun SHIM ; Sang Joon KIM
Investigative Magnetic Resonance Imaging 2022;26(4):237-245
In aging societies, the incidence of neurodegenerative diseases such as Alzheimer’s disease and Parkinson’s disease is increasing, and these diseases pose major challenges to today’s healthcare systems. Analyzing the characteristic imaging patterns of patients with neurodegenerative diseases is important. Because objective, reliable imaging assessments and precise analyses can lead to early diagnosis of neurodegenerative diseases, imaging patterns are being increasingly investigated. Artificial intelligence (AI) for analyzing brain MRI has been applied to neurodegenerative diseases, providing added value in early diagnosis. MRI-based AI software has been developed and studied worldwide, and some AI-based software is already used in actual clinical care. Currently, MRI-based volumetry and segmentation software are available. There is also an unmet demand for the application of AI in neurodegenerative diseases. Here, we review the current status of, and unmet needs for, the application of AI in neurodegenerative diseases. We also discuss the current limitations of AI, offer suggestions for AI-based software, and consider how it can be applied clinically in the future.
5. Adherence of Studies on Large Language Models for Medical Applications Published in Leading Medical Journals According to the MI-CLEAR-LLM Checklist
Ji Su KO ; Hwon HEO ; Chong Hyun SUH ; Jeho YI ; Woo Hyun SHIM
Korean Journal of Radiology 2025;26(4):304-312
Objective:
To evaluate the adherence of large language model (LLM)-based healthcare research to the Minimum Reporting Items for Clear Evaluation of Accuracy Reports of Large Language Models in Healthcare (MI-CLEAR-LLM) checklist, a framework designed to enhance the transparency and reproducibility of studies on the accuracy of LLMs for medical applications.
Materials and Methods:
A systematic PubMed search was conducted to identify articles on LLM performance published in high-ranking clinical medicine journals (the top 10% in each of the 59 specialties according to the 2023 Journal Impact Factor) from November 30, 2022, through June 25, 2024. Data on the six MI-CLEAR-LLM checklist items were independently extracted by two reviewers: 1) identification and specification of the LLM used, 2) stochasticity handling, 3) prompt wording and syntax, 4) prompt structuring, 5) prompt testing and optimization, and 6) independence of the test data. Adherence was calculated for each item.
Results:
Of 159 studies, 100% (159/159) reported the name of the LLM, 96.9% (154/159) reported the version, and 91.8% (146/159) reported the manufacturer. However, only 54.1% (86/159) reported the training data cutoff date, 6.3% (10/159) documented access to web-based information, and 50.9% (81/159) provided the date of the query attempts. Clear documentation regarding stochasticity management was provided in 15.1% (24/159) of the studies. Regarding prompt details, 49.1% (78/159) provided exact prompt wording and syntax but only 34.0% (54/159) documented prompt-structuring practices. While 46.5% (74/159) of the studies detailed prompt testing, only 15.7% (25/159) explained the rationale for specific word choices. Test data independence was reported for only 13.2% (21/159) of the studies, and 56.6% (43/76) provided URLs for internet-sourced test data.
Conclusion:
Although basic LLM identification details were relatively well reported, other key aspects, including stochasticity, prompts, and test data, were frequently underreported. Enhancing adherence to the MI-CLEAR-LLM checklist will allow LLM research to achieve greater transparency and will foster more credible and reliable future studies.
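The adherence rates reported in the Results above are simple proportions (the number of studies reporting an item over the number of studies assessed). A minimal sketch of that calculation in Python, using item labels abbreviated from the abstract and the counts it reports (not the authors' actual analysis code):

```python
# Adherence per MI-CLEAR-LLM item, computed as a simple proportion.
# Counts are those stated in the abstract; labels are shorthand.
TOTAL = 159  # number of included studies

counts = {
    "LLM name reported": 159,
    "LLM version reported": 154,
    "Manufacturer reported": 146,
    "Training data cutoff reported": 86,
    "Stochasticity management documented": 24,
    "Exact prompt wording provided": 78,
    "Test data independence reported": 21,
}

def adherence(count: int, total: int = TOTAL) -> float:
    """Return adherence as a percentage, rounded to one decimal place."""
    return round(100 * count / total, 1)

for item, n in counts.items():
    print(f"{item}: {adherence(n)}% ({n}/{TOTAL})")
```

Running this reproduces the percentages quoted in the Results (e.g., 96.9% for 154/159 and 13.2% for 21/159).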