1. In Vitro Diagnostics Certification for Creatinine Assays in Korea over 7 Years: Achievements and Future Outlook
Eun-Jung CHO ; Joonsang YU ; Jeayeon RYU ; Jiwoo SEO ; Hyunae LEE ; Chan-Ik CHO ; Tae-Dong JEONG ; Sollip KIM ; Woochang LEE ; Sail CHUN ; Won-Ki MIN
Annals of Laboratory Medicine 2025;45(5):493-502
Background:
An international reference measurement laboratory network for creatinine (Cr) is lacking; therefore, Korea developed an independent evaluation and certification system. The in vitro diagnostics (IVD) certification program, launched in 2017, formed part of a broader Cr standardization initiative intended to enhance accuracy at the manufacturing stage.
Methods:
The program was designed to evaluate analytical systems, including all reagent lots, calibrators, and instrument models, twice annually. Bias, imprecision, total error (TE), and linearity were evaluated based on established acceptance criteria. A post-certification process allows submission for a second challenge and validation of corrective actions.
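The abstract does not specify the acceptance limits or the total-error model used; a minimal Python sketch of a per-level acceptance check, assuming the common linear combination TE = |bias%| + 1.96 × CV and illustrative limits, might look like this:

```python
import statistics

def evaluate_system(measured, target, bias_limit=5.0, cv_limit=4.0, te_limit=10.0):
    """Per-level acceptance check for a Cr analytical system (illustrative).

    `measured` holds replicate results for one certified target value. The
    limits and the TE model (|bias%| + 1.96 * CV) are assumptions; the
    program's actual criteria are not stated in the abstract.
    """
    mean = statistics.mean(measured)
    bias_pct = (mean - target) / target * 100          # percentage bias vs. target
    cv_pct = statistics.stdev(measured) / mean * 100   # imprecision as CV%
    te_pct = abs(bias_pct) + 1.96 * cv_pct             # linear total-error model
    return {
        "bias_ok": abs(bias_pct) <= bias_limit,
        "cv_ok": cv_pct <= cv_limit,
        "te_ok": te_pct <= te_limit,
    }

# Example: replicates around a hypothetical 1.00 mg/dL certified value
print(evaluate_system([0.97, 1.02, 1.00, 0.99, 1.03], target=1.00))
```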
Results:
Between 2017 and 2023, 489 analytical systems were evaluated. Average acceptance rates for bias, imprecision, TE, and linearity were 70.8%, 95.9%, 87.7%, and 87.8%, respectively. The lowest acceptance rate for bias evaluation was 8.7% for the kinetic Jaffe method without compensation in 2018. Over the 7-year period, the mean absolute percentage bias (absBias%), coefficient of variation (CV), and TE were 4.62%, 1.37%, and 7.29%, respectively. The highest absBias% (7.94%) was observed in the 0.0 ≤ Cr < 1.0 target value range. Since 2019, a consistent reduction in absBias% has been observed.
Conclusions
This program is a pioneering response to the absence of a global certification program for Cr assays. It offers significant advantages, including comprehensive evaluations, fee-free participation, and a robust post-certification process. Continuous participation and improvement efforts by manufacturers have contributed to enhanced accuracy in Cr assays.
2. Clinical significance and outcomes of adult living donor liver transplantation for acute liver failure: a retrospective cohort study based on 15-year single-center experience
Geun-hyeok YANG ; Young-In YOON ; Shin HWANG ; Ki-Hun KIM ; Chul-Soo AHN ; Deok-Bog MOON ; Tae-Yong HA ; Gi-Won SONG ; Dong-Hwan JUNG ; Gil-Chun PARK ; Sung-Gyu LEE
Annals of Surgical Treatment and Research 2024;107(3):167-177
Purpose:
This study aimed to describe adult living donor liver transplantation (LDLT) for acute liver failure and evaluate its clinical significance by comparing its surgical and survival outcomes with those of deceased donor liver transplantation (DDLT).
Methods:
We retrospectively reviewed the medical records of 267 consecutive patients (161 LDLT recipients and 106 DDLT recipients) aged 18 years or older who underwent liver transplantation between January 2006 and December 2020.
Results:
The mean periods from hepatic encephalopathy to liver transplantation were 5.85 days and 8.35 days for LDLT and DDLT, respectively (P = 0.091). Among these patients, 121 (45.3%) had grade III or IV hepatic encephalopathy (living, 34.8% vs. deceased, 61.3%; P < 0.001), and 38 (14.2%) had brain edema (living, 16.1% vs. deceased, 11.3%; P = 0.269) before liver transplantation. There were no significant differences in in-hospital mortality (living, 11.8% vs. deceased, 15.1%; P = 0.435), 10-year overall survival (living, 90.8% vs. deceased, 84.0%; P = 0.096), or graft survival (living, 83.5% vs. deceased, 71.3%; P = 0.051). However, the mean postoperative intensive care unit stay was shorter in the LDLT group (5.0 days vs. 9.5 days, P < 0.001). In-hospital mortality was associated with vasopressor use (odds ratio [OR], 3.40; 95% confidence interval [CI], 1.45–7.96; P = 0.005) and brain edema (OR, 2.75; 95% CI, 1.16–6.52; P = 0.022) in the recipient at the time of transplantation, whereas LDLT itself (OR, 1.26; 95% CI, 0.59–2.66; P = 0.553) was not independently associated with in-hospital mortality.
Conclusion
LDLT is feasible for acute liver failure when organs from deceased donors are not available.
3. Estimation of Attributable Risk and Direct Medical and Non-Medical Costs of Major Mental Disorders Associated With Air Pollution Exposures Among Children and Adolescents in the Republic of Korea, 2011–2019
Yae Won HA ; Tae Hyun KIM ; Dae Ryong KANG ; Ki-Soo PARK ; Dong Chun SHIN ; Jaelim CHO ; Changsoo KIM
Journal of Korean Medical Science 2024;39(30):e218-
Background:
Recent studies have reported the burden of attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and depressive disorder. There is also mounting evidence of the effects of environmental factors, such as ambient air pollution, on these disorders among children and adolescents. However, few studies have evaluated the burden of mental disorders attributable to air pollution exposure in children and adolescents.
Methods:
We estimated the risk ratios of major mental disorders (ADHD, ASD, and depressive disorder) associated with air pollutants among children and adolescents using time-series data (2011–2019) obtained from a nationwide air pollution monitoring network and healthcare utilization claims data in the Republic of Korea. Based on the estimated risk ratios, we determined the population attributable fraction (PAF) and calculated the medical costs of major mental disorders attributable to air pollution.
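The abstract does not give the exact PAF estimator; a minimal sketch using Levin's formula, with purely illustrative inputs (the `rr` and `p_exposed` values are assumptions, not the study's estimates), could look like this:

```python
def population_attributable_fraction(rr, p_exposed):
    """Levin's formula: PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = p_exposed * (rr - 1.0)
    return excess / (excess + 1.0)

# Hypothetical inputs only; the study derived risk ratios from time-series models.
print(f"PAF = {population_attributable_fraction(rr=1.08, p_exposed=0.9):.1%}")
```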
Results:
A total of 33,598 patients were diagnosed with major mental disorders during the 9-year period. The PAFs for all major mental disorders were estimated at 6.9% (particulate matter < 10 μm [PM10]), 3.7% (PM2.5), and 2.2% (sulfur dioxide [SO2]). The PAF of PM10 was highest for depressive disorder (9.2%), followed by ASD (8.4%) and ADHD (5.2%). The direct medical costs of all major mental disorders attributable to PM10 and SO2 decreased during the study period.
Conclusion
This study assessed the burden of major mental disorders attributable to air pollution exposure in children and adolescents. We found that PM10, PM2.5, and SO2 contributed 7%, 4%, and 2%, respectively, to the risk of major mental disorders among children and adolescents.
4. Pre-hospital Korean Triage and Acuity Scale: the results of first and second pilot projects
Changshin KANG ; Han Joo CHOI ; Sang-Il KIM ; Yong Oh KIM ; Jung-Youn KIM ; Jungho KIM ; Hyun NOH ; Hyun Ho RYU ; Jung Hee WEE ; Gyuuk HWANG ; Ki Jeong HONG ; Jae Yun AHN ; Chun Song YOUN ; Eunsil KO ; Minhee LEE ; Sung-keun KO ; Tae Young LEE ; Eul Hee ROH ; Joonbum PARK
Journal of the Korean Society of Emergency Medicine 2024;35(1):6-15
While the Korean Triage and Acuity Scale (KTAS) was introduced in 2016 as a tool to identify patients at risk of catastrophic events, including death, in the emergency department, the triage system for the pre-hospital stage still lacks supporting evidence. The pre-hospital stage is characterized by time-sensitive and complex scenarios in which rapid and accurate decision-making is paramount to optimizing patient outcomes. Despite the vital role of pre-hospital care providers, a subjective, unvalidated 4-stage triage system is still used at the pre-hospital stage and therefore needs to be made more objective, standardized, and reliable. To improve the Korean emergency medical system, the pre-hospital KTAS (Pre-KTAS) was developed in 2020, and two pilot projects were subsequently conducted in 2022 and 2023. This paper presents the results of the first and second pilot projects for Pre-KTAS and highlights the potential benefits of using this newly developed triage tool in the pre-hospital setting. Furthermore, it suggests ways to improve the emergency medical system (EMS) in Korea by enhancing patient safety, resource allocation, and overall emergency response efficiency.
5. Impact of emergency room occupancy on the timing of antibiotic administration in patients with septic shock who visited the emergency room
Taek Kyu NAM ; Ji Ho RYU ; Mun ki MIN ; Daesup LEE ; Mose CHUN ; Seung Woo SON ; Yang Wook TAE ; Minjee LEE
Journal of the Korean Society of Emergency Medicine 2024;35(3):212-222
Objective:
The emergency department (ED) serves as the initial point of contact for many sepsis patients, but crowding can affect the timely delivery of essential interventions, such as antibiotics. This paper explores the relationship between antibiotic administration and ED crowding in the context of sepsis management.
Methods:
This single-center study at a tertiary care hospital included adult patients aged 18 years or older who visited the ED from January 2018 to December 2022. Patients showing signs of septic shock upon arrival were selected as the study population. ED occupancy and antibiotic administration time were examined, along with their association with timely antibiotic treatment.
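The abstract does not define how occupancy was computed or which correlation statistic was used; a hedged sketch of one plausible analysis, with hypothetical data and a Spearman correlation, is shown below:

```python
from scipy.stats import spearmanr

# Hypothetical data: ED occupancy (%) at arrival and time to antibiotics (min).
# One common occupancy definition is (patients present in the ED / ED beds) * 100;
# the study's exact definition is not given in the abstract.
occupancy = [60, 85, 100, 120, 140, 95]
time_to_antibiotics = [35, 42, 55, 70, 90, 50]

rho, p_value = spearmanr(occupancy, time_to_antibiotics)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```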
Results:
This study of 839 adult patients with septic shock found a weak correlation (P = 0.107) between the time to antibiotic administration and ED occupancy. Delayed antibiotic administration was observed when occupancy exceeded 100%. However, there was no significant correlation between antibiotic administration within one hour and ED occupancy.
Conclusion
Various factors, such as ED bed occupancy, medical staffing, resource allocation, and patient acuity, must be considered when comprehensively evaluating the impact of ED overcrowding on treating septic shock and other conditions.
6. Compositional changes in fecal microbiota in a new Parkinson's disease model: C57BL/6-Tg(NSE-haSyn) mice
Ji Eun KIM ; Ki Chun KWON ; You Jeong JIN ; Ayun SEOL ; Hee Jin SONG ; Yu Jeong ROH ; Tae Ryeol KIM ; Eun Seo PARK ; Gi Ho PARK ; Ji Won PARK ; Young Suk JUNG ; Joon Yong CHO ; Dae Youn HWANG
Laboratory Animal Research 2023;39(4):371-384
Background:
The gut–brain axis (GBA) in Parkinson's disease (PD) has been investigated in only a limited number of mouse models, despite dysbiosis of the gut microbiota being considered one of the major treatment targets for neurodegenerative disease. Therefore, this study examined the compositional changes of fecal microbiota in novel transgenic (Tg) mice overexpressing human α-synuclein (hαSyn) protein under the neuron-specific enolase (NSE) promoter to assess their potential as a GBA model.
Results:
The expression level of hαSyn protein was significantly higher in the substantia nigra and striatum of NSE-hαSyn Tg mice than in Non-Tg mice, whereas that of tyrosine hydroxylase (TH) was decreased in the same group. In addition, a 72.7% decrease in fall time and a 3.8-fold increase in fall number were detected in NSE-hαSyn Tg mice. The villus thickness and crypt length in the histological structure of the gastrointestinal (GI) tract were decreased in NSE-hαSyn Tg mice. Furthermore, the NSE-hαSyn Tg mice exhibited a significant increase in 11 genera, including the Scatolibacter, Clostridium, Feifania, Lachnoclostridium, and Acetatifactor populations, and a decrease in only two genera, the Ligilactobacillus and Sangeribacter populations, accompanied by enhanced microbiota richness and diversity.
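The richness and diversity metrics used are not named in the abstract; as one illustrative possibility, a Shannon diversity index could be computed from genus-level counts (the counts below are hypothetical):

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' from raw genus-level counts."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical counts for one fecal sample; not the study's data
print(f"H' = {shannon_index([120, 80, 40, 25, 10, 5]):.2f}")
```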
Conclusions
The motor coordination and balance dysfunction of NSE-hαSyn Tg mice may be associated with compositional changes in gut microbiota. In addition, these mice have potential as a GBA model.
8. Is the shock index a useful tool in trauma patients with alcohol ingestion?
Si Hong PARK ; Il Jae WANG ; Youngmo CHO ; Wook Tae YANG ; Seok-Ran YEOM ; Dae Sup LEE ; Mun Ki MIN ; Mose CHUN ; Up HUH ; Chan-Hee SONG ; Yeaeun KIM
Journal of the Korean Society of Emergency Medicine 2023;34(5):421-428
Objective:
Alcohol consumption is a frequent risk factor for trauma. The shock index is widely used to predict the prognosis of trauma, and alcohol can influence the shock index in several ways. This study investigated the usefulness of the shock index in trauma patients who had ingested alcohol.
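For reference, the shock index is the ratio of heart rate to systolic blood pressure; a minimal sketch follows, with the cutoff of 1 taken from the Results below and the vital-sign values purely illustrative:

```python
def shock_index(heart_rate_bpm, systolic_bp_mmhg):
    """Shock index = heart rate / systolic blood pressure."""
    return heart_rate_bpm / systolic_bp_mmhg

# Hypothetical vital signs; the cutoff of 1 follows the Results section
si = shock_index(118, 95)
print(f"SI = {si:.2f}, above cutoff: {si > 1}")
```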
Methods:
This was a retrospective, observational, single-center study. We performed a logistic regression analysis to assess the association between alcohol consumption and massive transfusions. A receiver operating characteristic (ROC) curve was constructed to determine the predictive value of the shock index for patients who had ingested alcohol.
Results:
A total of 5,128 patients were included in the study. The alcohol-positive group had lower systolic blood pressure and a higher heart rate; consequently, the shock index in this group was higher. There was no significant difference between the alcohol-positive and alcohol-negative groups in the proportions of patients who underwent massive transfusion or died in hospital, relative to the overall proportion of patients who underwent massive transfusion based on the shock index. In the logistic regression analysis, the alcohol-negative group showed higher odds ratios for massive transfusion than the alcohol-positive group. The area under the ROC curve for predicting massive transfusion was 0.831 for the alcohol-positive group and 0.825 for the alcohol-negative group. However, when a cutoff value of 1 was used, the false-positive rate was significantly higher in the alcohol-positive group.
Conclusion
The shock index is a useful tool for predicting outcomes in patients with trauma. However, in patients who have ingested alcohol, the shock index should be interpreted with caution.
9. Performance of a Novel CT-Derived Fractional Flow Reserve Measurement to Detect Hemodynamically Significant Coronary Stenosis
Si-Hyuck KANG ; Soo-Hyun KIM ; Sun-Hwa KIM ; Eun Ju CHUN ; Woo-Young CHUNG ; Chang-Hwan YOON ; Sang-Don PARK ; Chang-Wook NAM ; Ki-Hwan KWON ; Joon-Hyung DOH ; Young-Sup BYUN ; Jang-Whan BAE ; Tae-Jin YOUN ; In-Ho CHAE
Journal of Korean Medical Science 2023;38(32):e254-
Background:
Fractional flow reserve (FFR) based on computed tomography (CT) has been shown to better identify ischemia-causing coronary stenosis. However, the current technology requires high computational power, which hinders its widespread implementation in clinical practice. This prospective, multicenter study aimed to validate the diagnostic performance of a novel, simple CT-based fractional flow reserve (CT-FFR) calculation method in patients with coronary artery disease.
Methods:
Patients who underwent coronary CT angiography (CCTA) and invasive coronary angiography (ICA) within 90 days were prospectively enrolled. A hemodynamically significant lesion was defined as an FFR ≤ 0.80, and the area under the receiver operating characteristic curve (AUC) was the primary measure. After the planned analysis of the initial algorithm A, we performed another set of exploratory analyses for an improved algorithm B.
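As a rough illustration of the primary measure, the AUC can be computed by treating the invasive FFR ≤ 0.80 rule as the ground truth and a CT-FFR estimate as the classifier score; the values below are hypothetical, not the study's data:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical values: invasive FFR defines ground truth (<= 0.80 = significant),
# and the CT-FFR estimate serves as the classifier score.
invasive_ffr = [0.92, 0.78, 0.85, 0.70, 0.81, 0.74]
ct_ffr = [0.90, 0.80, 0.83, 0.72, 0.79, 0.76]

y_true = [1 if f <= 0.80 else 0 for f in invasive_ffr]  # 1 = hemodynamically significant
y_score = [-f for f in ct_ffr]                          # lower CT-FFR -> higher predicted risk

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```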
Results:
Of 184 patients who agreed to participate in the study, 151 were finally analyzed. Hemodynamically significant lesions were observed in 79 patients (52.3%). The AUC was 0.71 (95% confidence interval [CI], 0.63–0.80) for CCTA, 0.65 (95% CI, 0.56–0.74) for CT-FFR algorithm A (P = 0.866), and 0.78 (95% CI, 0.70–0.86) for algorithm B (P = 0.112). Diagnostic accuracy was 0.63 (0.55–0.71) for CCTA alone, 0.66 (0.58–0.74) for algorithm A, and 0.76 (0.68–0.82) for algorithm B.
Conclusion
This study suggests the feasibility of automated CT-FFR, which can be performed on-site within several hours. However, the diagnostic performance of the current algorithm does not meet the a priori criteria for superiority. Future research is required to improve the accuracy.
