Innovation in artificial intelligence medical regulation and governance: thoughts on breaking through the current normative framework
10.12026/j.issn.1001-8565.2024.09.03
- VernacularTitle:人工智能医疗监管与治理创新:突破现行规范框架的思考
- Author:
Bingshu CHEN 1; Yue WANG
Author Information
1. Neusoft Medical Systems Co., Ltd., Shenyang, Liaoning 110179, China
- Keywords:
artificial intelligence generated content;
medical device;
independent software;
computer-aided detection
- From:
Chinese Medical Ethics
2024;37(9):1030-1036
- Country: China
- Language: Chinese
- Abstract:
With the explosive growth of artificial intelligence (AI) applications represented by deep learning and artificial intelligence generated content (AIGC), AI technologies have been deployed across many fields of healthcare, including clinical diagnosis and treatment, hospital management, scientific research, education and training, and health management. While bringing efficient, high-quality services to patients, these applications have also raised a series of regulatory and governance issues that remain to be resolved and explored, especially in the regulation of AI medical applications. Currently, regulatory agencies in most countries regulate AI medical products according to risk level and position AI products as "aided" in nature; that is, physicians are required to sign off on the results of automated interpretation and bear legal responsibility. The World Health Organization has released two successive AI guidelines intended to provide a guiding framework for the development, provision, and deployment of AI technology in the health field, and has proposed breakthrough "independent use" recommendations for the diagnosis of specific diseases such as tuberculosis. United States regulators have also put forward a creative "software pre-certification" concept to shorten the approval process and innovate regulatory methods. Starting from the perspective of innovation in technology governance systems, drawing on relevant international experience, and taking into account China's technological development and practice, this paper proposes a governance approach of gradually opening up "independent" applications for AI products with specific application scenarios and lower risk levels, as well as conducting "institutional pre-certification" for high-risk AI medical products not yet included in regulation, so as to provide operable paths for institutional innovation.