- Author: Shinae YU; Sang-Shin LEE; Hyunyong HWANG
- Publication Type: Review article
- From: Kosin Medical Journal 2024;39(4):229-237
- Country: Republic of Korea
- Language: English
- Abstract: The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights the issues of privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in medical research, may compromise patient data privacy, perpetuate biases if they are trained on nondiverse datasets, and obscure accountability owing to their “black box” nature. Furthermore, the complexity of the role of AI may affect patients’ informed consent, as patients may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and the General Data Protection Regulation is essential, as these frameworks address liability in cases of AI errors. This review advocates a balanced approach to AI autonomy in clinical decisions, the rigorous validation of AI systems, ongoing monitoring, and robust data governance. Engaging diverse stakeholders is crucial for aligning AI development with ethical norms and addressing practical clinical needs. Ultimately, the proactive management of AI’s ethical implications is vital to ensure that its integration into healthcare improves patient outcomes without compromising ethical integrity.