Analysis and comparison of manual and artificial intelligence analysis of diabetic fundus photography
10.3760/cma.j.cn511434-20200915-00452
- VernacularTitle:糖尿病患者眼底照相人工与人工智能分析结果比较
- Author:
Fengyu WU
;
Xialian LI
- From:
Chinese Journal of Ocular Fundus Diseases
2021;37(1):27-31
- Country:China
- Language:Chinese
- Abstract:
Objective:To compare the consistency of manual analysis and artificial intelligence analysis in identifying fundus lesions in diabetic patients. Methods:A retrospective study. From May 2018 to May 2019, 1 053 consecutive diabetic patients (2 106 eyes) from the endocrinology department of the First Affiliated Hospital of Zhengzhou University were included in the study. Among them, 888 patients were male and 165 were female; they were 20-70 years old, with an average age of 53 years. All patients underwent fundus photography with a Japanese Kowa non-mydriatic fundus camera. The artificial intelligence analysis of Shanggong's ophthalmology cloud network screening platform automatically detected diabetic retinopathy (DR) lesions such as exudation, hemorrhage, and microaneurysms, and automatically classified the image detection results according to the DR international staging standard. Manual analysis was performed by two attending physicians and reviewed by the chief physician to ensure its accuracy. When the results of the two analysis methods differed, the manual analysis results were taken as the standard. The consistency rate was calculated and compared: consistency rate = (number of eyes with the same diagnosis / total number of effective eyes collected) × 100%. A Kappa consistency test was performed on the results of manual analysis and artificial intelligence analysis: 0.0≤κ<0.2 indicated very poor consistency, 0.2≤κ<0.4 poor consistency, 0.4≤κ<0.6 moderate consistency, and 0.6≤κ<1.0 good consistency. Results:Of the 2 106 eyes, 64 eyes that could not be identified by artificial intelligence because of severe disease were excluded, and 2 042 eyes were finally included in the analysis. The results of manual analysis and artificial intelligence analysis were completely consistent for 1 835 eyes (89.86%).
The analyses differed for 207 eyes (10.14%). The main differences were as follows: (1) artificial intelligence analysis identified hemorrhage and exudation where manual analysis did not, in 96 eyes (96/2 042, 4.70%); (2) artificial intelligence analysis identified drusen where manual analysis did not, in 71 eyes (71/2 042, 3.48%); (3) artificial intelligence analysis read the fundus as normal or as vitreous degeneration, whereas manual analysis found punctate exudation, hemorrhage, or microaneurysms, in 40 eyes (40/2 042, 1.95%). The diagnostic rates for non-DR were 23.2% and 20.2%, respectively, and the diagnostic rates for DR were 76.8% and 79.8%, respectively. The accuracy of artificial intelligence interpretation was 87.8%. The Kappa consistency test showed that the diagnostic results of manual analysis and artificial intelligence analysis were moderately consistent (κ=0.576, P<0.01). Conclusions:Manual analysis and artificial intelligence analysis show moderate consistency in the diagnosis of fundus lesions in diabetic patients; the accuracy of artificial intelligence interpretation is 87.8%.
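The two agreement statistics used in the abstract (the consistency rate and the unweighted Cohen's kappa) can be sketched as follows. This is an illustrative sketch only: the function names and the toy diagnosis labels are assumptions, not data or code from the study.

```python
from collections import Counter

def consistency_rate(manual, ai):
    """Consistency rate = (eyes with the same diagnosis / total effective eyes) x 100%."""
    agree = sum(1 for m, a in zip(manual, ai) if m == a)
    return 100.0 * agree / len(manual)

def cohen_kappa(manual, ai):
    """Unweighted Cohen's kappa for two raters grading the same eyes."""
    n = len(manual)
    # Observed agreement: proportion of eyes with identical diagnoses.
    p_o = sum(1 for m, a in zip(manual, ai) if m == a) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    counts_m = Counter(manual)
    counts_a = Counter(ai)
    p_e = sum(counts_m[label] * counts_a.get(label, 0) for label in counts_m) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-eye diagnoses (NOT the study data), one entry per eye.
manual = ["DR", "DR", "non-DR", "non-DR"]
ai = ["DR", "non-DR", "non-DR", "non-DR"]
print(consistency_rate(manual, ai))  # 75.0
print(cohen_kappa(manual, ai))       # 0.5
```

By the abstract's bands (0.4≤κ<0.6 moderate), the toy κ of 0.5, like the study's κ=0.576, would count as moderate consistency.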