Effects of Expert-Determined Reference Standards in Evaluating the Diagnostic Performance of a Deep Learning Model: A Malignant Lung Nodule Detection Task on Chest Radiographs
- Author:Jung Eun HUH; Jong Hyuk LEE; Eui Jin HWANG; Chang Min PARK
- Publication Type:Original Article
- From:Korean Journal of Radiology 2023;24(2):155-165
- Country:Republic of Korea
- Language:English
Abstract:
Objective:Little is known about the effects of using different expert-determined reference standards when evaluating the performance of deep learning-based automatic detection (DLAD) models and their added value to radiologists. We assessed the concordance of expert-determined standards with a clinical gold standard (herein, pathological confirmation) and the effects of different expert-determined reference standards on the estimated diagnostic performance of radiologists in detecting malignant pulmonary nodules on chest radiographs, with and without the assistance of a DLAD model.
Materials and Methods:This study included chest radiographs from 50 patients with pathologically proven lung cancer and 50 controls. Five expert-determined standards were constructed from the interpretations of 10 experts: individual judgment by the most experienced expert, majority vote, consensus judgments of two and three experts, and a latent class analysis (LCA) model. In separate reader tests, an additional 10 radiologists independently interpreted the radiographs, first without and then with the assistance of the DLAD model. Their diagnostic performance was estimated using the clinical gold standard and the various expert-determined standards as the reference standard, and the results were compared using the t test with Bonferroni correction (see the illustrative sketch below).
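To make the evaluation mechanics concrete, the following is a minimal, hypothetical Python sketch, not the authors' code, of how one expert-determined reference standard (majority vote) could be derived from expert reads and how per-reader sensitivity and specificity against the gold standard versus that reference might be compared with a Bonferroni-corrected paired t test. All arrays, probabilities, and the random simulation are assumptions for illustration; only the case counts (50 cancers, 50 controls), the 10 experts, and the 10 test readers reflect the study design.

```python
# Hypothetical sketch: majority-vote reference standard vs. clinical gold standard.
# All reads below are SIMULATED; the study used real expert and reader interpretations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_experts, n_readers = 100, 10, 10

# Clinical gold standard: 50 pathologically proven cancers, 50 controls
gold = np.r_[np.ones(50, dtype=int), np.zeros(50, dtype=int)]

# Simulated expert reads (1 = malignant nodule called), imperfect relative to the gold standard
expert_reads = (rng.random((n_experts, n_cases)) < np.where(gold == 1, 0.75, 0.05)).astype(int)

# Majority-vote reference standard: positive if more than half of the experts call it positive
majority_ref = (expert_reads.sum(axis=0) > n_experts / 2).astype(int)

def sens_spec(reads, reference):
    """Per-reader sensitivity and specificity against a given reference standard."""
    pos, neg = reference == 1, reference == 0
    sens = (reads[:, pos] == 1).mean(axis=1)
    spec = (reads[:, neg] == 0).mean(axis=1)
    return sens, spec

# Simulated interpretations by the 10 test readers
reader_reads = (rng.random((n_readers, n_cases)) < np.where(gold == 1, 0.65, 0.08)).astype(int)

sens_gold, spec_gold = sens_spec(reader_reads, gold)
sens_maj, spec_maj = sens_spec(reader_reads, majority_ref)

# Paired t test across readers; Bonferroni correction for the two comparisons made here
n_comparisons = 2
for name, a, b in [("sensitivity", sens_gold, sens_maj), ("specificity", spec_gold, spec_maj)]:
    t, p = stats.ttest_rel(a, b)
    print(f"{name}: gold={a.mean():.3f} vs majority-vote={b.mean():.3f}, "
          f"corrected p={min(1.0, p * n_comparisons):.3f}")
```

The same pattern would apply to the other expert-determined standards (single expert, two- or three-expert consensus, LCA), with only the construction of the reference labels changing.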
Results:The LCA model (sensitivity, 72.6%; specificity, 100%) was most similar to the clinical gold standard. When expert-determined standards were used, the sensitivities of the radiologists and of the DLAD model alone were overestimated, and their specificities were underestimated (all p-values < 0.05). DLAD assistance diminished the overestimation of sensitivity but exaggerated the underestimation of specificity (all p-values < 0.001). The DLAD model improved sensitivity and specificity to a greater extent when measured against the clinical gold standard than against the expert-determined standards (all p-values < 0.001), except for sensitivity with the LCA model (p = 0.094).
Conclusion:The LCA model was most similar to the clinical gold standard for malignant pulmonary nodule detection on chest radiographs. Expert-determined standards caused bias in measuring the diagnostic performance of the artificial intelligence model.