6. Machine Learning-Based Prediction of Korean Triage and Acuity Scale Level in Emergency Department Patients
Sae Won CHOI ; Taehoon KO ; Ki Jeong HONG ; Kyung Hwan KIM
Healthcare Informatics Research 2019;25(4):305-312
OBJECTIVES: Triage is the process of accurately assessing and classifying patients' symptoms so that those needing rapid treatment can be identified. The Korean Triage and Acuity Scale (KTAS) is used as the triage instrument in all emergency centers in Korea. The aim of this study was to train and compare machine learning models for predicting KTAS levels. METHODS: This was a cross-sectional study using data from a single emergency department of a tertiary university hospital. Information collected during triage was used in the analysis. Logistic regression, random forest, and XGBoost models were trained to predict the KTAS level. RESULTS: The models with the highest area under the receiver operating characteristic curve (AUROC) were the random forest and XGBoost models trained on the entire dataset (AUROC = 0.922, 95% confidence interval [CI] 0.917–0.925 and AUROC = 0.922, 95% CI 0.918–0.925, respectively). Models trained on clinical data had a higher AUROC than models trained on text data only, but models trained on all variables had the highest AUROC among comparable machine learning models. CONCLUSIONS: Machine learning can robustly predict the KTAS level at triage, which opens many possibilities for use, and adding text data improves predictive performance compared with using structured data alone.
Cross-Sectional Studies; Dataset; Emergencies; Emergency Service, Hospital; Forests; Humans; Logistic Models; Machine Learning; Natural Language Processing; ROC Curve; Triage
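As a rough illustration of the approach summarized in the abstract above (structured triage variables combined with free-text input, with models compared by AUROC), the following scikit-learn sketch uses hypothetical column names and a placeholder file path; it is not the authors' actual pipeline, and for brevity it trains only logistic regression and random forest models (XGBoost would slot in the same way).

```python
# Minimal sketch: combine structured triage variables with free-text notes
# to predict a KTAS level, then compare models by multiclass AUROC.
# The file path and column names ("age", "sbp", "hr", "chief_complaint",
# "ktas") are illustrative assumptions, not the study's actual schema.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("triage.csv")  # placeholder path
X = df[["age", "sbp", "hr", "chief_complaint"]]
y = df["ktas"]  # KTAS level 1-5
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Structured vitals are scaled; free text is turned into TF-IDF n-gram features.
features = ColumnTransformer([
    ("num", StandardScaler(), ["age", "sbp", "hr"]),
    ("txt", TfidfVectorizer(ngram_range=(1, 2), min_df=5), "chief_complaint"),
])

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("rf", RandomForestClassifier(n_estimators=300, random_state=0))]:
    model = Pipeline([("features", features), ("clf", clf)]).fit(X_tr, y_tr)
    # KTAS has five levels, so report a one-vs-rest multiclass AUROC.
    auroc = roc_auc_score(y_te, model.predict_proba(X_te), multi_class="ovr")
    print(f"{name}: AUROC = {auroc:.3f}")
```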
7. Explainability Enhanced Machine Learning Model for Classifying Intellectual Disability and Attention-Deficit/Hyperactivity Disorder With Psychological Test Reports
Tong Min KIM ; Young-Hoon KIM ; Sung-Hee SONG ; In-Young CHOI ; Dai-Jin KIM ; Taehoon KO
Journal of Korean Medical Science 2025;40(11):e26-
Background:
Psychological test reports are essential for assessing intellectual functioning and aid in diagnosing and treating intellectual disability (ID) and attention-deficit/hyperactivity disorder (ADHD). However, these reports are diverse, unstructured, and subjective, and they are prone to human error. Additionally, physicians often do not read the entire report, and the number of available reports is smaller than the number of diagnoses.
Methods:
To address these issues, we developed explainable predictive models that classify ID and ADHD from written reports. The reports of 1,475 patients with ID or ADHD who underwent intelligence tests were used to build the models. The models were developed by analyzing the reports with natural language processing (NLP) and incorporating the physician's diagnosis for each report. To make the models explainable, we selected n-gram features by extracting important features with SHapley Additive exPlanations (SHAP) and permutation importance. An n-gram feature-based original-text search system was developed to compensate for the loss of human readability caused by NLP, enabling human-readable texts to be reconstructed from the selected n-gram features.
Results:
The maximum model accuracy was 0.92, and 80 human-readable texts were reconstructed from the four models.
Conclusion:
The results showed that the models could accurately classify ID and ADHD even with a small number of reports, and the models could also explain their predictions. The explainability-enhanced model can help physicians understand the classification process for ID and ADHD and can provide evidence-based insights.
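As a rough illustration of the workflow summarized in the abstract above (n-gram features from report text, a classifier, and feature-importance ranking used to trace predictions back to readable text), the following scikit-learn sketch assumes a hypothetical file, column names, and a binary ADHD-versus-ID setup; for brevity it uses only permutation importance, and it does not reproduce the paper's actual models, labels, or SHAP analysis.

```python
# Minimal sketch: classify reports via TF-IDF n-gram features, then rank
# n-grams by permutation importance so the most influential ones can be
# searched for in the original report text.
# The file name, column names, and binary label are assumptions.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

df = pd.read_csv("reports.csv")  # assumed columns: "report_text", "diagnosis"
vec = TfidfVectorizer(ngram_range=(1, 3), min_df=5, max_features=2000)
X = vec.fit_transform(df["report_text"]).toarray()  # dense for permutation importance
y = (df["diagnosis"] == "ADHD").astype(int)         # 1 = ADHD, 0 = ID (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Permute each n-gram column and measure the drop in accuracy; large drops mark
# n-grams the classifier relies on, which can then be located in the raw reports.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
terms = np.array(vec.get_feature_names_out())
for i in imp.importances_mean.argsort()[::-1][:10]:
    print(f"{terms[i]}: {imp.importances_mean[i]:.4f}")
```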