1. Real-time semantic segmentation of gastric intestinal metaplasia using a deep learning approach
Vitchaya SIRIPOPPOHN ; Rapat PITTAYANON ; Kasenee TIANKANON ; Natee FAKNAK ; Anapat SANPAVAT ; Naruemon KLAIKAEW ; Peerapon VATEEKUL ; Rungsun RERKNIMITR
Clinical Endoscopy 2022;55(3):390-400
Background/Aims:
Previous artificial intelligence (AI) models attempting to segment gastric intestinal metaplasia (GIM) areas have failed to be deployed in real-time endoscopy due to their slow inference speeds. Here, we propose a new GIM segmentation AI model with inference speeds faster than 25 frames per second that maintains a high level of accuracy.
Methods:
Investigators from Chulalongkorn University obtained 802 histologically proven GIM images for AI model training. Four strategies were proposed to improve model accuracy. First, transfer learning from public colon datasets was employed. Second, an image preprocessing technique, contrast-limited adaptive histogram equalization (CLAHE), was employed to produce clearer GIM areas. Third, data augmentation was applied to make the model more robust. Lastly, the bilateral segmentation network model was applied to segment GIM areas in real time. The results were analyzed using different validity values.
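The CLAHE preprocessing step described above can be sketched as follows. This is an illustrative, simplified implementation in pure NumPy (per-tile equalization without the inter-tile interpolation used by full CLAHE); the clip limit and tile grid size are assumed values, not the authors' settings.

```python
import numpy as np

def clahe_gray(img: np.ndarray, clip_limit: int = 40, tile: int = 8) -> np.ndarray:
    """Simplified CLAHE on a uint8 grayscale image: clip each tile's
    histogram, redistribute the excess, then equalize per tile.
    (Illustrative sketch; omits the bilinear blending of full CLAHE.)"""
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tile, w // tile
    for i in range(tile):
        for j in range(tile):
            ys = slice(i * th, h if i == tile - 1 else (i + 1) * th)
            xs = slice(j * tw, w if j == tile - 1 else (j + 1) * tw)
            block = img[ys, xs]
            hist = np.bincount(block.ravel(), minlength=256)
            excess = np.maximum(hist - clip_limit, 0).sum()
            hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute clipped mass
            cdf = hist.cumsum()
            lut = (cdf * 255 // max(int(cdf[-1]), 1)).astype(np.uint8)
            out[ys, xs] = lut[block]
    return out

# Demo on a synthetic low-contrast frame (hypothetical input)
frame = (np.arange(96 * 96, dtype=np.uint32) % 256).astype(np.uint8).reshape(96, 96)
enhanced = clahe_gray(frame)
```

In practice a color endoscopic frame would be converted to a luminance/chrominance space and only the luminance channel equalized, so tissue color is preserved.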
Results:
From the internal test, our AI model achieved an inference speed of 31.53 frames per second. For GIM detection, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 93%, 80%, 82%, 92%, and 87%, respectively, and the mean intersection over union for GIM segmentation was 57%.
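The validity values reported above follow from a standard confusion matrix, and the segmentation score is a pixel-level intersection over union. A minimal sketch of both computations (the counts and masks below are hypothetical, chosen only to illustrate the formulas):

```python
import numpy as np

def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Per-image detection metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-level intersection over union between boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

m = detection_metrics(tp=93, fp=20, fn=7, tn=80)  # hypothetical counts
ov = iou(np.array([[True, False]]), np.array([[True, True]]))  # 1 / 2 overlap
```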
Conclusions:
The bilateral segmentation network combined with transfer learning, contrast-limited adaptive histogram equalization, and data augmentation can provide high sensitivity and good accuracy for GIM detection and segmentation.
2. Performance comparison between two computer-aided detection colonoscopy models by trainees using different false positive thresholds: a cross-sectional study in Thailand
Kasenee TIANKANON ; Julalak KARUEHARDSUWAN ; Satimai ANIWAN ; Parit MEKAROONKAMOL ; Panukorn SUNTHORNWECHAPONG ; Huttakan NAVADURONG ; Kittithat TANTITANAWAT ; Krittaya MEKRITTHIKRAI ; Salin SAMUTRANGSI ; Peerapon VATEEKUL ; Rungsun RERKNIMITR
Clinical Endoscopy 2024;57(2):217-225
Background/Aims:
This study aims to compare the polyp detection performance of “Deep-GI,” a newly developed artificial intelligence (AI) model, with that of a previously validated computer-aided polyp detection (CADe) AI model, using various false positive (FP) thresholds and determining the best threshold for each model.
Methods:
Colonoscopy videos were collected prospectively and reviewed by three expert endoscopists (gold standard), trainees, CADe (CAD EYE; Fujifilm Corp.), and Deep-GI. Polyp detection sensitivity (PDS), polyp miss rates (PMR), and false-positive alarm rates (FPR) were compared among the three groups using different FP thresholds, defined by the duration for which bounding boxes remained on the screen.
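The duration-based FP thresholds above can be operationalized by counting a detection as an alarm only if its bounding box persists for a minimum on-screen duration. A minimal sketch under the assumption of a fixed frame rate (the function and variable names are hypothetical, not the study's implementation):

```python
import math

def count_alarms(box_present: list, fps: float, min_seconds: float) -> int:
    """Count runs of consecutive frames containing a detection box
    whose duration meets or exceeds the threshold."""
    min_frames = math.ceil(min_seconds * fps)
    alarms, run = 0, 0
    for present in box_present + [False]:  # trailing sentinel flushes the final run
        if present:
            run += 1
        else:
            if run >= min_frames:
                alarms += 1
            run = 0
    return alarms

# Example: at 25 fps, a >=0.5-s threshold requires 13 consecutive frames,
# so only the 15-frame run below counts as an alarm.
frames = [False] * 5 + [True] * 15 + [False] * 10 + [True] * 5 + [False] * 5
n = count_alarms(frames, fps=25, min_seconds=0.5)
```

Raising the threshold suppresses brief spurious boxes (lower FPR) but also discards brief true detections, which is consistent with the PMR increase reported at the longest threshold.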
Results:
In total, 170 colonoscopy videos were used in this study. Deep-GI showed the highest PDS (99.4% vs. 85.4% vs. 66.7%, p<0.01) and the lowest PMR (0.6% vs. 14.6% vs. 33.3%, p<0.01) when compared to CADe and trainees, respectively. Compared to CADe, Deep-GI demonstrated lower FPR at FP thresholds of ≥0.5 seconds (12.1 vs. 22.4) and ≥1 second (4.4 vs. 6.8) (both p<0.05). However, when the threshold was raised to ≥1.5 seconds, the FPR became comparable (2 vs. 2.4, p=0.3), while the PMR increased from 2% to 10%.
Conclusions:
Compared to CADe, Deep-GI demonstrated a higher PDS with significantly lower FPR at ≥0.5- and ≥1-second thresholds. At the ≥1.5-second threshold, both systems showed comparable FPR with increased PMR.