1. The ethics of using artificial intelligence in medical research
Shinae YU ; Sang-Shin LEE ; Hyunyong HWANG
Kosin Medical Journal 2024;39(4):229-237
The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights the issues of privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in medical research, may compromise patient data privacy, perpetuate biases if they are trained on nondiverse datasets, and obscure accountability owing to their “black box” nature. Furthermore, the complexity of the role of AI may affect patients’ informed consent, as they may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and General Data Protection Regulation is essential, as they address liability in cases of AI errors. This review advocates a balanced approach to AI autonomy in clinical decisions, the rigorous validation of AI systems, ongoing monitoring, and robust data governance. Engaging diverse stakeholders is crucial for aligning AI development with ethical norms and addressing practical clinical needs. Ultimately, the proactive management of AI’s ethical implications is vital to ensure that its integration into healthcare improves patient outcomes without compromising ethical integrity.
2. Analysis of Acute Myeloid Leukemia in Korean Patients with Sole Trisomy 6.
Shinae YU ; Min Jung KWON ; Seung Tae LEE ; Hee Yeon WOO ; Hyosoon PARK ; Sun Hee KIM
Annals of Laboratory Medicine 2014;34(5):402-404
No abstract available.
Adult; Aged; Aged, 80 and over; Asian Continental Ancestry Group/*genetics; Child; Chromosomes, Human, Pair 6; Female; Humans; Infant; Leukemia, Myeloid, Acute/*genetics/pathology; Male; Middle Aged; Republic of Korea; *Trisomy; Young Adult
3. Successful Treatment of Fungemia Caused by Cyberlindnera fabianii with Anidulafungin: A Case Report.
Jeong In LEE ; Shinae YU ; Jong Sin PARK ; Eun Jeong JOO ; Jong Hee SHIN ; Min Jung KWON
Annals of Clinical Microbiology 2015;18(3):94-97
Cyberlindnera fabianii (previously known as Hansenula fabianii, Pichia fabianii, and Lindnera fabianii) is a yeast species that forms a biofilm, allowing it to resist azole drugs. Here, we report a case of C. fabianii fungemia that was successfully treated with anidulafungin. The organism was initially misidentified as Candida utilis (with a high probability of 93%, suggesting good identification) using the VITEK 2 yeast identification card (YST ID; bioMérieux, USA). The species responsible for the patient's fungemia was correctly identified after sequencing of the internal transcribed spacer (ITS) region and the D1/D2 domain of the large subunit (26S) rDNA gene. The CLSI M27-A3 broth microdilution method was used to determine the in vitro antifungal activity of anidulafungin and fluconazole against C. fabianii; the MICs were 0.03 µg/mL and 2 µg/mL, respectively. The patient recovered after 14 days of anidulafungin treatment.
Biofilms; Candida; Danazol; DNA, Ribosomal; Fluconazole; Fungemia*; Humans; Pichia; Yeasts
4. A Questionnaire Survey of HLA Crossmatch Tests in Korea (2015).
Shinae YU ; Eun Suk KANG ; Myoung Hee PARK
Laboratory Medicine Online 2017;7(3):147-156
BACKGROUND: We conducted a questionnaire survey of laboratories performing human leukocyte antigen crossmatch (HLA-XM) tests to provide a basis for laboratory standardization of HLA-XM testing in Korea. METHODS: Questionnaires were distributed to 51 HLA laboratories participating in the HLA-XM part of the HLA proficiency survey program organized by the Korean Society for Laboratory Medicine, and replies from 50 laboratories were analyzed. The questionnaires covered the following items: 1) HLA-XM methods performed and the annual number of tests, 2) specimen types and lymphocyte separation methods, and 3) test procedures and reagents for complement-dependent cytotoxicity crossmatch (CDC-XM) and flow cytometry crossmatch (FCXM). RESULTS: The proportion of laboratories performing anti-human globulin (AHG) CDC-XM (47/49, 96%) and FCXM (30/50, 60%) was considerably higher than in the 2005 survey (AHG CDC-XM, 35/43, 81%; FCXM, 7/44, 16%). Regarding the annual number of XM tests, more than 50% of the laboratories were low-volume laboratories performing ≤50 tests, and only 10% performed >500 tests. For cell isolation, negative selection was used by 43% (21/49) of laboratories performing CDC-XM. The number of cells reacted per 1 µL of serum varied among laboratories in both CDC-XM (1,000–8,000) and FCXM (1,300–20,000) tests. For the interpretation of FCXM, the log fluorescence ratio (26/30, 87%) was more commonly used than channel shift values (5/30, 17%). CONCLUSIONS: Considerable variation was noted in both CDC-XM and FCXM methods across laboratories. Continuous efforts toward laboratory standardization are needed to reduce inter-laboratory variation in HLA-XM test results.
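The two FCXM interpretation approaches mentioned above both compare patient serum against a negative-control serum run on the same donor cells. The sketch below is a minimal illustration of how those readouts are conventionally derived; the function name, the MFI values, and the interpretation comments are illustrative assumptions, not details taken from the survey.

```python
import math

def fcxm_metrics(mfi_patient: float, mfi_negative_control: float) -> dict:
    """Two common FCXM readouts, comparing patient serum against a
    negative-control serum run on the same donor lymphocytes."""
    ratio = mfi_patient / mfi_negative_control      # fluorescence ratio
    log_ratio = math.log10(ratio)                   # log fluorescence ratio
    # Channel shift is usually taken from median channel values of the
    # fluorescence histogram; raw MFI is used here only for illustration.
    shift = mfi_patient - mfi_negative_control
    return {"ratio": ratio, "log_ratio": log_ratio, "channel_shift": shift}

# Hypothetical values (not survey data): patient MFI 820 vs. control MFI 240.
# Whether this counts as a positive crossmatch depends on each laboratory's
# own validated cutoff, which is part of the inter-laboratory variation
# described in the survey.
print(fcxm_metrics(820, 240))
```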
Cell Separation; Flow Cytometry; Fluorescence; Humans; Indicators and Reagents; Korea*; Leukocytes; Lymphocytes
5. Performance Evaluation of MEDITAPE UC-11A Strip Test in Estimating the Urine Albumin-to-Creatinine Ratio and Urine Protein-to-Creatinine Ratio
Shinae YU ; Sae Am SONG ; Kyung Ran JUN ; Jeong Nyeo LEE
Laboratory Medicine Online 2020;10(1):52-57
RESULTS: Using the urine albumin-to-creatinine ratio (UACR), analyzed via quantitative assay, as a reference to estimate albuminuria (UACR >30 mg/g), the concordance rate, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of UACR, analyzed using MEDITAPE UC-11A, were 80.5%, 97.5%, 67.0%, 70.3%, and 97.1%, respectively. Using the urine protein-to-creatinine ratio (UPCR), analyzed via quantitative assay, as a reference to estimate proteinuria (UPCR >0.15 g/g), the concordance rate, sensitivity, specificity, PPV, and NPV of UPCR, analyzed using MEDITAPE UC-11A, were 86.7%, 94.4%, 81.5%, 77.6%, and 95.6%, respectively. CONCLUSIONS: UACR and UPCR, analyzed using MEDITAPE UC-11A, exhibited relatively high sensitivity and NPV, which is beneficial for laboratory screening for both albuminuria and proteinuria.
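The performance figures above are the standard 2×2-table metrics for a qualitative test judged against a quantitative reference. A minimal sketch of those formulas follows; the counts in the example are hypothetical and are not the study's raw data.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2-table metrics for a strip test versus a quantitative
    reference method (positive = above the cutoff)."""
    total = tp + fp + fn + tn
    return {
        "concordance": (tp + tn) / total,   # overall agreement with the reference
        "sensitivity": tp / (tp + fn),      # reference-positives detected by the strip
        "specificity": tn / (tn + fp),      # reference-negatives called negative
        "ppv": tp / (tp + fp),              # probability a positive strip result is true
        "npv": tn / (tn + fn),              # probability a negative strip result is true
    }

# Hypothetical counts for illustration only:
print(diagnostic_performance(tp=78, fp=33, fn=2, tn=67))
```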
Albuminuria; Chronic Disease; Humans; Hypertension; Kidney Diseases; Mass Screening; Proteinuria; Renal Insufficiency, Chronic; Sensitivity and Specificity
6. Comparison of Two Automated Immunoassays for the Detection of Anti-Hepatitis A Virus Total Immunoglobulin and IgM.
Sang Yong SHIN ; Hyun Jin LIM ; Changmin YI ; Shinae YU ; Min Jung KWON ; Hyosoon PARK ; Young Jae KIM ; Chae Lim JUNG ; Hee Yeon WOO
Journal of Laboratory Medicine and Quality Assurance 2011;33(2):103-109
BACKGROUND: The detection of total anti-hepatitis A virus (anti-HAV) immunoglobulin (Ig) and IgM is important for diagnosing acute hepatitis A. Our laboratory introduced new commercial automated chemiluminescence immunoassays (CLIAs) in addition to a pre-existing automated CLIA. We evaluated the agreement between two automated CLIAs in detecting total anti-HAV Ig and IgM in serum samples. METHODS: We analyzed 181 samples that were submitted for testing at Kangbuk Samsung Medical Center, comparing results from the ADVIA Centaur XP (Siemens, Germany) and the MODULAR ANALYTICS E170 (Roche, Switzerland) analyzers. Reverse transcription (RT)-PCR was performed when the results from the two analyzers were discrepant. RESULTS: The agreement rates between the ADVIA Centaur XP and the MODULAR ANALYTICS E170 for total anti-HAV Ig and IgM were 97.2% and 98.9%, respectively. Discrepant results were obtained in seven cases, all of which were HAV-negative by RT-PCR. CONCLUSIONS: The total anti-HAV Ig and IgM results obtained using the two automated analyzers were comparable. However, for equivocal anti-HAV IgM results on the ADVIA Centaur XP, retesting and follow-up testing are recommended.
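The agreement rates reported above are overall percent agreement between paired qualitative calls from the two analyzers. A minimal sketch of that calculation is shown below; the toy data are hypothetical and do not represent the study's 181 samples.

```python
from typing import Sequence

def percent_agreement(results_a: Sequence[str], results_b: Sequence[str]) -> float:
    """Overall percent agreement between two assays run on the same samples.

    results_a, results_b: parallel lists of qualitative calls
    (e.g. "pos" / "neg") from the two analyzers.
    """
    if len(results_a) != len(results_b):
        raise ValueError("Both assays must be run on the same sample set")
    concordant = sum(a == b for a, b in zip(results_a, results_b))
    return 100.0 * concordant / len(results_a)

# Hypothetical toy data: 1 discordant call out of 8 paired results.
a = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg"]
print(f"{percent_agreement(a, b):.1f}%")  # 87.5%
```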
Hepatitis A; Hepatitis A Antibodies; Hepatitis A virus; Immunoassay; Immunoglobulin M; Immunoglobulins; Luminescence; Reverse Transcription; Viruses