Research progress on ethical issues and regulatory pathways of large language models in clinical applications
10.12025/j.issn.1008-6358.2025.20251326
- VernacularTitle:大语言模型在临床应用中的伦理问题与监管路径研究进展
- Author:
Xinyao WANG
1
;
Lingyi TANG
1
Author Information
1. Information Center, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China.
- Publication Type:AI4M
- Keywords:
large language model;
clinical application;
artificial intelligence;
ethics;
regulation
- From:
Chinese Journal of Clinical Medicine
2026;33(1):24-30
- Country:China
- Language:Chinese
-
Abstract:
Large language models (LLMs) are increasingly applied in the medical field, yet their clinical implementation faces numerous ethical and regulatory challenges. This paper reviews seven major ethical challenges: patient safety and accuracy, bias and fairness, privacy and data protection, transparency and explainability, accountability and legal liability, patient autonomy and informed consent, and the doctor-patient relationship and trust. At the regulatory level, international research indicates that the United States currently lacks regulations specific to medical LLM use, although it is exploring the regulation of high-risk LLMs. The EU’s AI Act classifies medical AI as high-risk and imposes stringent compliance requirements. China has issued measures for the management of generative AI and advocates industry standards, though its legal framework remains incomplete. Proposed solutions include embedding ethical principles during model development, strengthening human-machine collaboration and manual oversight in clinical settings, establishing clear legal standards for accountability, safeguarding data privacy and security, implementing continuous monitoring and improvement, and deepening international cooperation and multidisciplinary governance.