1.Construction and verification of intelligent endoscopic image analysis system for monitoring upper gastrointestinal blind spots
Xiaoquan ZENG ; Zehua DONG ; Lianlian WU ; Yanxia LI ; Yunchao DENG ; Honggang YU
Chinese Journal of Digestive Endoscopy 2024;41(5):391-396
Objective:To construct an intelligent endoscopic image analysis system that monitors blind spots of the upper gastrointestinal tract, and to test its performance.Methods:A total of 87 167 upper gastrointestinal endoscopy images (dataset 1), including 75 551 for training and 11 616 for testing, and 2 414 pharyngeal images (dataset 2), including 2 233 for training and 181 for testing, were retrospectively collected from the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from 2016 to 2020. A 27-category classification model for blind spot monitoring in the upper gastrointestinal tract (model 1, which distinguished 27 anatomical sites such as the pharynx, esophagus and stomach) and a 5-category classification model for blind spot monitoring in the pharynx (model 2, which distinguished the palate, posterior pharyngeal wall, larynx, and left and right pyriform sinuses) were constructed. The two models were trained and tested on dataset 1 and dataset 2, respectively, using the EfficientNet-B4, ResNet50 and VGG16 architectures of the Keras framework. Thirty complete upper gastrointestinal endoscopy videos were retrospectively collected from the Digestive Endoscopy Center of Renmin Hospital of Wuhan University in 2021 to test the blind spot monitoring performance of model 2.Results:In the cross-sectional comparison of image-level accuracy of model 1 in identifying the 27 anatomical sites of the upper gastrointestinal tract, the mean accuracies of EfficientNet-B4, ResNet50 and VGG16 were 90.90%, 90.24% and 89.22%, respectively, with EfficientNet-B4 performing best; the per-site accuracy of the EfficientNet-B4 model ranged from 80.49% to 97.80%. In the cross-sectional comparison of image-level accuracy of model 2 in identifying the 5 anatomical sites of the pharynx, the mean accuracies of EfficientNet-B4, ResNet50 and VGG16 were 99.40%, 98.56% and 97.01%, respectively, again with EfficientNet-B4 performing best; the per-site accuracy of the EfficientNet-B4 model ranged from 96.15% to 100.00%. The overall accuracy of model 2 in identifying the 5 anatomical sites of the pharynx in videos was 97.33% (146/150).Conclusion:The intelligent endoscopic image analysis system based on deep learning can monitor blind spots in the upper gastrointestinal tract, combining pharyngeal blind spot monitoring with esophagogastroduodenal blind spot monitoring. The system shows high accuracy on both images and videos, and is expected to play a role in clinical practice by assisting endoscopists to achieve complete observation of the upper gastrointestinal tract.
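The abstract names the backbones (EfficientNet-B4, ResNet50, VGG16 under Keras) but not the training details. Below is a minimal illustrative sketch of how a 27-category anatomical-site classifier of the kind described for model 1 could be set up with an EfficientNet-B4 backbone; the pre-trained weights, input size, optimizer, hyperparameters and directory layout are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch: 27-class upper-GI site classifier on an EfficientNet-B4
# backbone (Keras). Preprocessing and hyperparameters are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB4

NUM_SITES = 27          # 27 upper gastrointestinal anatomical sites (model 1)
IMG_SIZE = (380, 380)   # default EfficientNet-B4 input resolution

def build_site_classifier(num_classes: int = NUM_SITES) -> tf.keras.Model:
    """EfficientNet-B4 feature extractor with a softmax classification head."""
    backbone = EfficientNetB4(include_top=False, weights="imagenet",
                              input_shape=(*IMG_SIZE, 3), pooling="avg")
    x = layers.Dropout(0.3)(backbone.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Assumed directory layout: dataset1/train/<site_name>/*.jpg, one folder per site.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset1/train", image_size=IMG_SIZE, batch_size=32)
model = build_site_classifier()
model.fit(train_ds, epochs=10)
```

The 5-category pharyngeal model (model 2) would follow the same pattern with `num_classes=5` and dataset 2.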
2.An artificial intelligence system based on multi-modal endoscopic images for the diagnosis of gastric neoplasms (with video)
Xiao TAO ; Lianlian WU ; Hongliu DU ; Zehua DONG ; Honggang YU
Chinese Journal of Digestive Endoscopy 2024;41(9):690-696
Objective:To develop an artificial intelligence model based on multi-modal endoscopic images for identifying gastric neoplasms, and to compare its diagnostic efficacy with that of traditional models and endoscopists.Methods:A total of 3 267 images of gastric neoplasms and non-neoplastic lesions under white light (WL) endoscopy and weak magnification (WM) endoscopy, from 463 patients at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from March 2018 to December 2019, were utilized. Two single-modal models (a WL model and a WM model) were constructed based on WL and WM images separately. WL and WM images of corresponding lesions were combined into image pairs to build a multi-modal (MM) characteristics integration model. A test set consisting of 696 images of 102 lesions from 97 patients from March 2020 to March 2021 was used to compare the diagnostic efficacy of the single-modal models and the multi-modal model for gastric neoplastic lesions at both the image and the lesion level. Additionally, video clips of 80 lesions from 80 patients from January 2022 to June 2022 were employed to compare the diagnostic efficacy of the WM model, the MM model and 7 endoscopists for gastric neoplasms at the lesion level.Results:In the image test set, the sensitivity and accuracy of the MM model for diagnosing gastric neoplasms at the image level were 84.96% (576/678) and 86.89% (1 220/1 289), respectively, which were superior to the 63.13% (113/179) and 80.59% (353/438) of the WM model ( χ2=42.81, P<0.001; χ2=10.33, P=0.001), and also better than those of the WL model [70.47% (74/105), χ2=13.52, P<0.001; 67.82% (175/258), χ2=57.27, P<0.001]. At the lesion level, the MM model showed a sensitivity of 87.50% (28/32), a specificity of 88.57% (62/70) and an accuracy of 88.24% (90/102). Its specificity ( χ2=22.99, P<0.001) and accuracy ( χ2=19.06, P<0.001) were significantly higher than those of the WL model, with no significant difference from the WM model ( P>0.05). In the video test, the sensitivity, specificity and accuracy of the MM model at the lesion level were 95.00% (19/20), 93.33% (56/60) and 93.75% (75/80), significantly better than those of the endoscopists, who had a sensitivity of 77.14% (108/140), a specificity of 79.29% (333/420) and an accuracy of 78.75% (441/560) ( χ2=18.62, P<0.001; χ2=35.07, P<0.001; χ2=53.12, P<0.001); the sensitivity of the MM model was also higher than that of the advanced endoscopists [83.33% (50/60)] ( χ2=4.23, P=0.040). Conclusion:The artificial intelligence model based on multi-modal endoscopic images for the diagnosis of gastric neoplasms shows high efficacy on both the image and video test sets, outperforming the average diagnostic performance of endoscopists in the video test.
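The abstract does not describe how the WL and WM modalities are fused inside the MM model. One common design is late feature fusion, where each image of a pair is encoded by a CNN and the two feature vectors are concatenated before a classification head; the sketch below illustrates that idea only, with the shared EfficientNet-B4 encoder, feature sizes and training settings all being assumptions rather than the authors' architecture.

```python
# Hypothetical late-fusion sketch for paired white-light (WL) and
# weak-magnification (WM) images; not the paper's actual architecture.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB4

IMG_SHAPE = (380, 380, 3)

# A single encoder shared by both modalities (weight sharing is an assumption).
encoder = EfficientNetB4(include_top=False, weights="imagenet",
                         input_shape=IMG_SHAPE, pooling="avg")

wl_in = layers.Input(IMG_SHAPE, name="white_light")
wm_in = layers.Input(IMG_SHAPE, name="weak_magnification")
fused = layers.Concatenate()([encoder(wl_in), encoder(wm_in)])  # joint features
x = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="neoplastic")(x)

mm_model = models.Model(inputs=[wl_in, wm_in], outputs=out)
mm_model.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy", tf.keras.metrics.AUC()])
mm_model.summary()
```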
3.Artificial intelligence-assisted diagnosis system of Helicobacter pylori infection based on deep learning
Mengjiao ZHANG ; Lianlian WU ; Daqi XING ; Zehua DONG ; Yijie ZHU ; Shan HU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(2):109-114
Objective:To construct an artificial intelligence-assisted diagnosis system to recognize the endoscopic features of Helicobacter pylori ( HP) infection, and to evaluate its performance in real clinical cases. Methods:A total of 1 033 cases who underwent 13C-urea breath test and gastroscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to March 2021 were collected retrospectively. Patients with positive 13C-urea breath test results (defined as HP infection) were assigned to the case group ( n=485), and those with negative results to the control group ( n=548). Gastroscopic images of various mucosal features indicating HP-positive and HP-negative status, as well as gastroscopic images of HP-positive and HP-negative cases, were randomly assigned to the training set, validation set and test set at a ratio of 8∶1∶1. An artificial intelligence-assisted diagnosis system for identifying HP infection was developed based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). In the system, the CNN identifies and extracts mucosal features from each patient's endoscopic images and generates feature vectors, and the LSTM then receives the feature vectors to comprehensively judge the HP infection status. The diagnostic performance of the system was evaluated by sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC). Results:The diagnostic accuracy of the system for nodularity, atrophy, intestinal metaplasia, xanthoma, diffuse redness + spotty redness, mucosal swelling + enlarged fold + sticky mucus and HP-negative features was 87.5% (14/16), 74.1% (83/112), 90.0% (45/50), 88.0% (22/25), 63.3% (38/60), 80.1% (238/297) and 85.7% (36/42), respectively. The sensitivity, specificity, accuracy and AUC of the system for predicting HP infection were 89.6% (43/48), 61.8% (34/55), 74.8% (77/103) and 0.757, respectively. The diagnostic accuracy of the system was equivalent to that of endoscopists diagnosing HP infection under white light (74.8% VS 72.1%, χ2=0.246, P=0.620). Conclusion:The system developed in this study shows noteworthy ability in evaluating HP infection status, and can be used to assist endoscopists in diagnosing HP infection.
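The abstract outlines a two-stage architecture: a CNN encodes each endoscopic image of a patient into a feature vector, and an LSTM then aggregates the per-image vectors into a patient-level judgment of HP status. The sketch below shows one way such a pipeline can be wired together in Keras; the backbone, sequence length, feature dimension and training settings are assumptions for illustration, not the study's implementation.

```python
# Hypothetical CNN + LSTM sketch: per-image feature extraction followed by
# sequence-level aggregation into a patient-level HP-infection prediction.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

MAX_IMAGES = 40              # assumed upper bound on images per patient
IMG_SHAPE = (224, 224, 3)

# Per-image encoder producing a 2048-dimensional feature vector.
cnn = ResNet50(include_top=False, weights="imagenet",
               input_shape=IMG_SHAPE, pooling="avg")

seq_in = layers.Input((MAX_IMAGES, *IMG_SHAPE))      # one patient's image set
feats = layers.TimeDistributed(cnn)(seq_in)          # (batch, MAX_IMAGES, 2048)
x = layers.LSTM(128)(feats)                          # aggregate across images
out = layers.Dense(1, activation="sigmoid", name="hp_positive")(x)

hp_model = models.Model(seq_in, out)
hp_model.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy", tf.keras.metrics.AUC()])
hp_model.summary()
```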
4.Cost-effectiveness analysis of an artificial intelligence-assisted diagnosis and treatment system for gastrointestinal endoscopy
Jia LI ; Lianlian WU ; Dairu DU ; Jun LIU ; Qing WANG ; Zi LUO ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(3):206-211
Objective:To analyze the cost-effectiveness of a relatively mature artificial intelligence (AI)-assisted diagnosis and treatment system (ENDOANGEL) for gastrointestinal endoscopy in China, and to provide objective and effective data support for hospital acquisition decisions.Methods:The numbers of gastrointestinal endoscopy procedures at the Endoscopy Center of Renmin Hospital of Wuhan University from January 2017 to December 2019 were collected to predict the number of procedures during the expected service life (10 years) of ENDOANGEL. The net present value, payback period and average rate of return were used to analyze the cost-effectiveness of ENDOANGEL.Results:The net present value of one ENDOANGEL system over its expected service life (10 years) was 6 724 100 yuan, the payback period was 1.10 years, and the average rate of return reached 147.84%.Conclusion:ENDOANGEL shows significant economic benefits, and it is reasonable for hospitals to acquire a mature AI-assisted diagnosis and treatment system for gastrointestinal endoscopy.
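The three metrics used here (net present value, payback period and average rate of return) are standard capital-budgeting formulas. The sketch below shows how they are computed from a cash-flow series; the discount rate and cash-flow figures are placeholders, not the study's data.

```python
# Capital-budgeting metrics used in the analysis, with placeholder numbers.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        previous = cumulative
        cumulative += cf
        if t > 0 and cumulative >= 0:
            return t - 1 + (-previous / cf)   # interpolate within year t
    return float("inf")

def average_rate_of_return(cash_flows):
    """Mean annual net inflow divided by the initial investment."""
    investment = -cash_flows[0]
    inflows = cash_flows[1:]
    return sum(inflows) / len(inflows) / investment

# Hypothetical example: an up-front purchase followed by 10 years of net inflows
# (thousand yuan); replace with the hospital's own projections.
flows = [-1_200] + [1_500] * 10
print(f"NPV={npv(0.06, flows):.1f}  "
      f"payback={payback_period(flows):.2f} years  "
      f"ARR={average_rate_of_return(flows):.2%}")
```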
5.Evaluation of an assistant diagnosis system for gastric neoplastic lesions under white light endoscopy based on artificial intelligence
Junxiao WANG ; Zehua DONG ; Ming XU ; Lianlian WU ; Mengjiao ZHANG ; Yijie ZHU ; Xiao TAO ; Hongliu DU ; Chenxia ZHANG ; Xinqi HE ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(4):293-297
Objective:To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white light endoscopy.Methods:The diagnostic efficacy of ENDOANGEL-LD was tested on an image testing dataset and a video testing dataset. The image testing dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video testing dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image testing dataset were calculated. The accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric neoplastic lesions in the video testing dataset were compared with those of four senior endoscopists.Results:In the image testing dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990), respectively, while the accuracy, sensitivity and specificity for gastric neoplastic lesions were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505), respectively. In the video testing dataset, the sensitivity of ENDOANGEL-LD was higher than that of the four senior endoscopists [100.0% (38/38) VS 85.5% (130/152), χ2=6.220, P=0.013], while its accuracy [81.9% (68/83) VS 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) VS 60.6% (109/180), χ2=0.569, P=0.451] were comparable with those of the four senior endoscopists. Conclusion:ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions, helping endoscopists in clinical work.
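The reported sensitivity, specificity and accuracy follow directly from the confusion-matrix counts given in the abstract. A short sketch reproducing the image-level figures for neoplastic lesions:

```python
# Reproducing the image-level metrics for gastric neoplastic lesions from the
# counts quoted in the abstract (273/300 neoplastic and 441/505 non-neoplastic
# images classified correctly).
def metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = metrics(tp=273, fn=27, tn=441, fp=64)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
# -> sensitivity=91.0% specificity=87.3% accuracy=88.7% (714/805)
```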
6.Application of an artificial intelligence-assisted endoscopic diagnosis system to the detection of focal gastric lesions (with video)
Mengjiao ZHANG ; Ming XU ; Lianlian WU ; Junxiao WANG ; Zehua DONG ; Yijie ZHU ; Xinqi HE ; Xiao TAO ; Hongliu DU ; Chenxia ZHANG ; Yutong BAI ; Renduo SHANG ; Hao LI ; Hao KUANG ; Shan HU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(5):372-378
Objective:To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions in gastroscopy.Methods:A total of 5 488 white light gastroscopic images (2 733 images with focal gastric lesions and 2 755 images without) from June to November 2019 and videos of 92 cases (288 168 clear stomach frames) from May to June 2020 at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively collected to test the AI system. A total of 3 997 consecutive patients undergoing gastroscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were prospectively enrolled to assess the clinical applicability of the AI system. When the AI system recognized an abnormal lesion, it marked the lesion with a blue box as a warning. The ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives of the AI system were statistically analyzed.Results:In the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI system were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. In the video test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI system were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI system was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed by the AI system, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The number of false positives per gastroscopy was 2 (1, 4), most of which were due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid deposited in the fundus (9.1%, 1 021/11 225).Conclusion:The application of the AI system can increase the detection rate of focal gastric lesions.
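The abstract describes the run-time behaviour (each recognized lesion is highlighted with a blue box as a warning) rather than the detector internals. The sketch below illustrates that overlay loop only; `detect_lesions` stands in for the trained YOLO v3 model, and the video source, confidence threshold and box format are assumptions.

```python
# Hypothetical warning-overlay loop: each frame is passed to a lesion detector
# and every detection above a confidence threshold is outlined with a blue box.
import cv2

CONF_THRESHOLD = 0.5        # assumed confidence cut-off
BLUE_BGR = (255, 0, 0)      # OpenCV uses BGR channel order

def detect_lesions(frame):
    """Stand-in for YOLO v3 inference; returns [(x1, y1, x2, y2, score), ...]."""
    return []               # replace with the trained detector's predictions

def annotate(frame):
    for x1, y1, x2, y2, score in detect_lesions(frame):
        if score >= CONF_THRESHOLD:
            cv2.rectangle(frame, (x1, y1), (x2, y2), BLUE_BGR, thickness=2)
    return frame

cap = cv2.VideoCapture("gastroscopy.mp4")   # hypothetical recorded procedure
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("AI lesion warning", annotate(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```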
7.Effectiveness of artificial intelligence-endoscopic ultrasound biliary and pancreatic recognition system: a crossover study
Boru CHEN ; Liwen YAO ; Lihui ZHANG ; Zihua LU ; Huiling WU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(10):778-783
Objective:To explore the effectiveness of the artificial intelligence-endoscopic ultrasound (AI-EUS) biliary and pancreatic recognition system in assisting the recognition of EUS images.Methods:Subjects who underwent EUS for suspected biliary and pancreatic diseases from December 2019 to August 2020 were prospectively collected from the database of the Department of Gastroenterology, Renmin Hospital of Wuhan University. Pancreatic EUS images of 28 subjects were included for recognition of pancreatic standard stations, and bile duct EUS images of 29 subjects were included for recognition of bile duct standard stations. Eight novice endoscopists from the Department of Gastroenterology of Renmin Hospital of Wuhan University read the 57 EUS videos with and without the assistance of the AI-EUS biliary and pancreatic recognition system. The accuracy of the endoscopists' identification of biliary and pancreatic standard stations with and without the assistance of AI-EUS was compared.Results:With the assistance of AI-EUS, the accuracy of pancreatic standard station identification by the novice endoscopists increased from 67.2% (903/1 344) to 78.4% (1 054/1 344), and the accuracy of bile duct standard station identification increased from 56.4% (523/928) to 73.8% (685/928).Conclusion:The AI-EUS biliary and pancreatic recognition system can improve the accuracy of biliary and pancreatic EUS image recognition and can assist diagnosis in clinical work.
8.Cost-effectiveness of early gastric cancer screening using an artificial intelligence gastroscopy-assisted system
Li HUANG ; Lianlian WU ; Yijie ZHU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(12):1001-1005
Objective:To compare the cost-effectiveness of early gastric cancer screening before and after the use of an artificial intelligence (AI) gastroscopy-assisted system.Methods:Gastroscopy cases before (non-AI group) and after (AI group) the use of the artificial intelligence gastroscopy-assisted system were retrospectively collected in Renmin Hospital of Wuhan University from January 1, 2017 to February 28, 2022, and the proportion of early gastric cancer among all gastric cancers was analyzed. Costs were estimated based on the standards of Renmin Hospital of Wuhan University and the 2021 edition of the Wuhan Disease Diagnosis-Related Group Payment Standards. Cost-effectiveness analysis was conducted per 100 thousand cases with and without the system, and the incremental cost-effectiveness ratio was calculated.Results:In the non-AI group, the proportion of early gastric cancer among all gastric cancers was 28.81% (70/243). Per 100 thousand cases, the gastroscopy screening cost was 54 598.0 thousand yuan, the early gastric cancer treatment cost was 221.8 thousand yuan, and the total cost was 54 819.8 thousand yuan; the direct effectiveness was 894.2 thousand yuan, the indirect effectiveness was 1 828.2 thousand yuan, and the total effectiveness was 2 722.4 thousand yuan. In the AI group, the proportion of early gastric cancer among all gastric cancers was 36.56% (366/1 001). Per 100 thousand cases, the gastroscopy screening cost was 53 440.0 thousand yuan, the early gastric cancer treatment cost was 315.8 thousand yuan, and the total cost was 53 755.8 thousand yuan; the direct effectiveness was 1 273.5 thousand yuan, the indirect effectiveness was 2 603.1 thousand yuan, and the total effectiveness was 3 876.6 thousand yuan. The use of the system reduced the cost of early gastric cancer screening by 1 064.0 thousand yuan and increased the benefit by 1 154.2 thousand yuan per 100 thousand cases. The incremental cost-effectiveness ratio was -0.92.Conclusion:The use of the artificial intelligence gastroscopy-assisted system for early gastric cancer screening can reduce medical costs and improve screening efficiency, and it is recommended for gastroscopy screening.
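The incremental cost-effectiveness ratio reported in the Conclusion can be reproduced directly from the per-100-thousand-case figures quoted above; a short sketch (all amounts in thousand yuan):

```python
# Reproducing the incremental cost-effectiveness ratio from the abstract's
# per-100-thousand-case figures (thousand yuan).
non_ai = {"cost": 54_598.0 + 221.8, "effect": 894.2 + 1_828.2}    # 54 819.8 / 2 722.4
ai     = {"cost": 53_440.0 + 315.8, "effect": 1_273.5 + 2_603.1}  # 53 755.8 / 3 876.6

delta_cost = ai["cost"] - non_ai["cost"]        # -1 064.0 (cost saved)
delta_effect = ai["effect"] - non_ai["effect"]  # +1 154.2 (benefit gained)
icer = delta_cost / delta_effect                # about -0.92

print(f"dCost={delta_cost:.1f}  dEffect={delta_effect:.1f}  ICER={icer:.2f}")
```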
9.The application of restricted kinematic alignment in total knee arthroplasty
Zhiqiang SHAO ; Dengxian WU ; Honggang CAI ; Zheng LIU ; Kai SONG ; Xiaofeng ZHANG ; Zhihong XU ; Qing JIANG
Chinese Journal of Orthopaedics 2023;43(16):1076-1084
Objective:To investigate the feasibility and therapeutic effect of total knee arthroplasty (TKA) with a cruciate-retaining (CR) prosthesis using the FEM-X1 femoral extramedullary positioning instrument based on the theory of restricted kinematic alignment (rKA).Methods:Thirty-five cases who underwent total knee arthroplasty in Nanjing Drum Tower Hospital from November 2019 to December 2020 were retrospectively analyzed, including 7 males and 28 females with an average age of 71±8 years (range, 55-85 years). Following rKA alignment, the FEM-X1 femoral extramedullary positioning instrument was used for TKA with a CR prosthesis. Before the operation, full-length anteroposterior and lateral radiographs of the lower limb were taken in a standing position for preoperative assessment of the lateral distal femoral angle (LDFA), the medial proximal tibial angle (MPTA) and the hip-knee-ankle angle (HKA). According to the principle of rKA alignment, the target LDFA, MPTA and HKA were calculated. The proximal tibia and distal femur were cut using the extramedullary positioning instrument, and the posterior femoral condyle was cut according to the tibial plateau and mediolateral soft tissue tension. The prosthesis was installed after osteotomy. The release of the collateral ligaments, PCL function, release of the PCL, patellofemoral trajectory, release of the patellar retinaculum, blood loss and operation time were recorded. Postoperative LDFA, MPTA, HKA and the posterior slope angle of the tibial prosthesis were measured on X-ray images. The Knee Society Score (KSS) was used for functional evaluation.Results:Thirty-five cases of TKA with a CR prosthesis following rKA alignment were successfully completed. The operation time was 100 (90, 110) min, and the blood loss was 100 (100, 200) ml. Thirty cases were followed up for 12.5±0.7 months (range, 12-14 months). The pre- and post-operative LDFA were 1.0° (-2.0°, 4.0°) and 0° (-2.0°, 2.0°), MPTA were -4.0° (-5.0°, -1.0°) and -2.0° (-3.0°, -1.0°), and HKA were -3.0° (-3.0°, -1.0°) and -2.0° (-3.0°, -1.0°), respectively, with no significant pre- to post-operative difference in the three angles ( Z=-0.89, P=0.372; Z=1.87, P=0.061; Z=1.03, P=0.302). The posterior tibial slope was 5° (3°, 7°). At the one-year follow-up, the KSS clinical score was 94 (92, 97) and the functional score was 80 (70, 90) in the 30 cases. During the operation, the PCL was released in 1 case because of excessive tension, and lateral retinacular release was performed in 1 case because of poor patellofemoral tracking. Conclusion:TKA with a CR prosthesis using the extramedullary positioning instrument based on the theory of rKA alignment showed good clinical outcomes at short-term follow-up.
10.The effect of artificial intelligence system on the diagnosis rate of precancerous state of gastric cancer: a single center self-controlled clinical study
Ying LI ; Qinghong XU ; Lianlian WU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2022;39(7):538-541
Objective:To evaluate the impact of an artificial intelligence (AI) system on the diagnosis rate of precancerous states of gastric cancer.Methods:A single-center self-controlled study was conducted, controlling for factors such as the mainframe and model of the endoscope, the operating doctor, and season and climate, with pathology as the gold standard. The diagnosis rates of precancerous states of gastric cancer, including atrophic gastritis (AG) and intestinal metaplasia (IM), with traditional gastroscopy (from September 1, 2019 to November 30, 2019) and AI-assisted endoscopy (from September 1, 2020 to November 15, 2020) in the Eighth Hospital of Wuhan were statistically analyzed and compared, and subgroup analysis was conducted according to the seniority of the doctors.Results:Compared with traditional gastroscopy, the AI system significantly improved the diagnosis rate of AG [13.3% (38/286) VS 7.4% (24/323), χ2=5.689, P=0.017] and IM [33.9% (97/286) VS 26.0% (84/323), χ2=4.544, P=0.033]. For junior doctors (less than 5 years of endoscopic experience), the AI system had a more pronounced effect on the diagnosis rate of AG [11.9% (22/185) VS 5.8% (11/189), χ2=4.284, P=0.038] and IM [30.3% (56/185) VS 20.6% (39/189), χ2=4.580, P=0.032]. For senior doctors (more than 10 years of endoscopic experience), the diagnosis rates of AG and IM increased slightly, but the differences were not statistically significant. Conclusion:The AI system shows the potential to improve the diagnosis rate of precancerous states of gastric cancer, especially for junior endoscopists, and to reduce missed diagnosis of early gastric cancer.
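The reported χ2 and P values correspond to a Pearson chi-square test on 2×2 tables of diagnosed versus not-diagnosed cases. The sketch below reproduces the AG comparison with SciPy; it is for illustration and assumes the uncorrected test was used, which matches the quoted statistic.

```python
# Reproducing the AG comparison (AI-assisted 38/286 vs traditional 24/323)
# with an uncorrected Pearson chi-square test.
from scipy.stats import chi2_contingency

ai_pos, ai_total = 38, 286          # AG diagnoses with AI assistance
trad_pos, trad_total = 24, 323      # AG diagnoses with traditional gastroscopy

table = [[ai_pos, ai_total - ai_pos],
         [trad_pos, trad_total - trad_pos]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.3f}, P={p:.3f}")   # close to χ2=5.689, P=0.017, as reported
```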
