1. Identify drug-drug interactions via deep learning: A real world study.
Jingyang LI ; Yanpeng ZHAO ; Zhenting WANG ; Chunyue LEI ; Lianlian WU ; Yixin ZHANG ; Song HE ; Xiaochen BO ; Jian XIAO
Journal of Pharmaceutical Analysis 2025;15(6):101194-101194
Identifying drug-drug interactions (DDIs) is essential to prevent adverse effects from polypharmacy. Although deep learning has advanced DDI identification, the gap between powerful models and their limited clinical application and evaluation has hindered clinical benefit. Here, we developed a Multi-Dimensional Feature Fusion model named MDFF, which integrates one-dimensional simplified molecular-input line-entry system (SMILES) sequence features, two-dimensional molecular graph features, and three-dimensional geometric features to enhance drug representations for predicting DDIs. MDFF was trained and validated on two DDI datasets, evaluated across three distinct scenarios, and compared with advanced DDI prediction models using accuracy, precision, recall, area under the curve, and F1 score metrics. MDFF achieved state-of-the-art performance across all metrics. Ablation experiments showed that integrating multi-dimensional drug features yielded the best results. More importantly, we obtained adverse drug reaction reports uploaded by Xiangya Hospital of Central South University from 2021 to 2023 and used MDFF to identify potential adverse DDIs. Among 12 real-world adverse drug reaction reports, the predictions for 9 were supported by relevant evidence. Additionally, MDFF demonstrated the ability to explain adverse DDI mechanisms, providing insights into the mechanism behind one specific report and highlighting its potential to assist practitioners in improving medical practice.
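A minimal sketch of the multi-dimensional fusion idea, in PyTorch. The feature dimensions, the simple concatenation fusion, and the classifier head are illustrative assumptions; the abstract does not specify MDFF's actual architecture, and the three per-view encoders are left as stand-ins.

import torch
import torch.nn as nn

class FusionDDIClassifier(nn.Module):
    def __init__(self, d1=128, d2=128, d3=128, hidden=256, n_classes=2):
        super().__init__()
        fused_dim = d1 + d2 + d3               # concatenate the three views
        self.head = nn.Sequential(
            nn.Linear(2 * fused_dim, hidden),  # a pair of fused drugs
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def fuse(self, f_seq, f_graph, f_geom):
        # 1D SMILES, 2D graph, and 3D geometric features, fused by concatenation.
        return torch.cat([f_seq, f_graph, f_geom], dim=-1)

    def forward(self, drug_a, drug_b):
        a = self.fuse(*drug_a)                 # each drug: (f_seq, f_graph, f_geom)
        b = self.fuse(*drug_b)
        return self.head(torch.cat([a, b], dim=-1))

# Toy usage: random stand-in features for a batch of 4 drug pairs.
feats = lambda: tuple(torch.randn(4, 128) for _ in range(3))
logits = FusionDDIClassifier()(feats(), feats())   # shape: (4, 2)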
2. Construction and verification of intelligent endoscopic image analysis system for monitoring upper gastrointestinal blind spots
Xiaoquan ZENG ; Zehua DONG ; Lianlian WU ; Yanxia LI ; Yunchao DENG ; Honggang YU
Chinese Journal of Digestive Endoscopy 2024;41(5):391-396
Objective:To construct an intelligent endoscopic image analysis system that can monitor blind spots in the upper gastrointestinal tract, and to test its performance.Methods:A total of 87 167 upper gastrointestinal endoscopy images (dataset 1), including 75 551 for training and 11 616 for testing, and 2 414 pharyngeal images (dataset 2), including 2 233 for training and 181 for testing, were retrospectively collected from the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from 2016 to 2020. A 27-category classification model for blind spot monitoring in the upper gastrointestinal tract (model 1, distinguishing 27 anatomical sites such as the pharynx, esophagus, and stomach) and a 5-category classification model for blind spot monitoring in the pharynx (model 2, distinguishing the palate, posterior pharyngeal wall, larynx, and left and right pyriform sinuses) were constructed. The two models were trained and tested on datasets 1 and 2, respectively, each implemented with the EfficientNet-B4, ResNet50 and VGG16 architectures in the Keras framework. Thirty complete upper gastrointestinal endoscopy videos were retrospectively collected from the Digestive Endoscopy Center of Renmin Hospital of Wuhan University in 2021 to test the blind spot monitoring performance of model 2.Results:In the comparison of architectures for model 1 in identifying the 27 anatomical sites of the upper gastrointestinal tract in images, the mean accuracy of EfficientNet-B4, ResNet50, and VGG16 was 90.90%, 90.24%, and 89.22%, respectively; EfficientNet-B4 performed best, with per-site accuracy ranging from 80.49% to 97.80%. In the comparison of architectures for model 2 in identifying the 5 anatomical sites of the pharynx in images, the mean accuracy of EfficientNet-B4, ResNet50, and VGG16 was 99.40%, 98.56%, and 97.01%, respectively; again EfficientNet-B4 performed best, with per-site accuracy ranging from 96.15% to 100.00%. The overall accuracy of model 2 in identifying the 5 anatomical sites of the pharynx in videos was 97.33% (146/150).Conclusion:The intelligent endoscopic image analysis system based on deep learning can monitor blind spots in the upper gastrointestinal tract, combining pharyngeal and esophagogastroduodenal blind spot monitoring functions. The system shows high accuracy on both images and videos, and is expected to play a role in clinical practice by assisting endoscopists in achieving complete observation of the upper gastrointestinal tract.
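The abstract names the Keras implementations of EfficientNet-B4, ResNet50 and VGG16; a minimal sketch of the best-performing 27-site classifier might look as follows. The 380x380 input size, pooling head, dropout rate, and optimizer are assumptions, since the authors' preprocessing and training schedule are not stated.

import tensorflow as tf

NUM_SITES = 27  # pharynx, esophagus, stomach subsites, etc.

base = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=(380, 380, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_SITES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...) on dataset 1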
3. An artificial intelligence system based on multi-modal endoscopic images for the diagnosis of gastric neoplasms (with video)
Xiao TAO ; Lianlian WU ; Hongliu DU ; Zehua DONG ; Honggang YU
Chinese Journal of Digestive Endoscopy 2024;41(9):690-696
Objective:To develop an artificial intelligence model based on multi-modal endoscopic images for identifying gastric neoplasms, and to compare its diagnostic efficacy with that of traditional models and endoscopists.Methods:A total of 3 267 images of gastric neoplasms and non-neoplastic lesions under white light (WL) endoscopy and weak magnification (WM) endoscopy from 463 patients at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from March 2018 to December 2019 were utilized. Two single-modal models (the WL model and the WM model) were constructed based on WL and WM images separately. WL and WM images of corresponding lesions were combined into image pairs to create a multi-modal (MM) characteristics integration model. A test set consisting of 696 images of 102 lesions from 97 patients from March 2020 to March 2021 was used to compare the diagnostic efficacy of the single-modal models and the multi-modal model for gastric neoplastic lesions at both the image and the lesion levels. Additionally, video clips of 80 lesions from 80 patients from January 2022 to June 2022 were employed to compare the diagnostic efficacy of the WM model, the MM model and 7 endoscopists for gastric neoplasms at the lesion level.Results:In the image test set, the sensitivity and accuracy of the MM model for diagnosing gastric neoplasms at the image level were 84.96% (576/678) and 86.89% (1 220/1 289), respectively, superior to the 63.13% (113/179) and 80.59% (353/438) of the WM model (χ2=42.81, P<0.001; χ2=10.33, P=0.001), and also better than those of the WL model [70.47% (74/105), χ2=13.52, P<0.001; 67.82% (175/258), χ2=57.27, P<0.001]. The MM model showed a sensitivity of 87.50% (28/32), a specificity of 88.57% (62/70), and an accuracy of 88.24% (90/102) at the lesion level. Its specificity (χ2=22.99, P<0.001) and accuracy (χ2=19.06, P<0.001) were significantly higher than those of the WL model, while there was no significant difference from the WM model (P>0.05). In the video test, the sensitivity, specificity and accuracy of the MM model at the lesion level were 95.00% (19/20), 93.33% (56/60) and 93.75% (75/80), significantly better than those of the endoscopists, who had a sensitivity of 77.14% (108/140), a specificity of 79.29% (333/420), and an accuracy of 78.75% (441/560) (χ2=18.62, P<0.001; χ2=35.07, P<0.001; χ2=53.12, P<0.001); the MM model's sensitivity was also significantly higher than that of advanced endoscopists [83.33% (50/60), χ2=4.23, P=0.040].Conclusion:The artificial intelligence model based on multi-modal endoscopic images for the diagnosis of gastric neoplasms shows high efficacy in both image and video test sets, outperforming the average diagnostic performance of endoscopists in the video test.
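The pairwise comparisons above are standard chi-square tests on 2x2 contingency tables; as a quick Python check of one reported figure (the MM model's image-level sensitivity, 576/678, versus the WM model's, 113/179):

from scipy.stats import chi2_contingency

table = [[576, 678 - 576],   # MM model: correctly vs. incorrectly classified
         [113, 179 - 113]]   # WM model: correctly vs. incorrectly classified
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")   # chi2 ≈ 42.8, p < 0.001, matching the abstract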
4. Artificial intelligence-assisted diagnosis system of Helicobacter pylori infection based on deep learning
Mengjiao ZHANG ; Lianlian WU ; Daqi XING ; Zehua DONG ; Yijie ZHU ; Shan HU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(2):109-114
Objective:To construct an artificial intelligence-assisted diagnosis system to recognize the characteristics of Helicobacter pylori (HP) infection under endoscopy, and to evaluate its performance in real clinical cases.Methods:A total of 1 033 cases who underwent 13C-urea breath test and gastroscopy in the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to March 2021 were collected retrospectively. Patients with positive results of the 13C-urea breath test (defined as HP infection) were assigned to the case group (n=485), and those with negative results to the control group (n=548). Gastroscopic images of various mucosal features indicating HP-positive and HP-negative status, as well as the gastroscopic images of HP-positive and HP-negative cases, were randomly assigned to the training, validation and test sets at a ratio of 8∶1∶1. An artificial intelligence-assisted diagnosis system for identifying HP infection was developed based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). In the system, the CNN identifies and extracts mucosal features from each patient's endoscopic images and generates feature vectors, and the LSTM then receives the feature vectors to comprehensively judge HP infection status. The diagnostic performance of the system was evaluated by sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC).Results:The diagnostic accuracy of the system for nodularity, atrophy, intestinal metaplasia, xanthoma, diffuse redness + spotty redness, mucosal swelling + enlarged fold + sticky mucus, and HP-negative features was 87.5% (14/16), 74.1% (83/112), 90.0% (45/50), 88.0% (22/25), 63.3% (38/60), 80.1% (238/297) and 85.7% (36/42), respectively. The sensitivity, specificity, accuracy and AUC of the system for predicting HP infection were 89.6% (43/48), 61.8% (34/55), 74.8% (77/103), and 0.757, respectively. The diagnostic accuracy of the system was equivalent to that of endoscopists diagnosing HP infection under white light (74.8% vs 72.1%, χ2=0.246, P=0.620).Conclusion:The system developed in this study shows noteworthy ability in evaluating HP status, and can be used to assist endoscopists in diagnosing HP infection.
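A minimal sketch of the CNN-plus-LSTM design described in the Methods, written in PyTorch for illustration. The ResNet-18 backbone, feature dimension and hidden size are stand-in assumptions; the abstract does not name the authors' actual backbone or hyperparameters.

import torch
import torch.nn as nn
import torchvision.models as models

class HPInfectionNet(nn.Module):
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in CNN encoder
        backbone.fc = nn.Identity()               # expose 512-d image features
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)            # HP positive / negative

    def forward(self, images):                    # (batch, n_images, 3, H, W)
        b, s = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).view(b, s, -1)
        _, (h, _) = self.lstm(feats)              # aggregate per-image features
        return self.fc(h[-1])                     # one patient-level prediction

# Toy usage: 2 patients, 6 endoscopic images each.
logits = HPInfectionNet()(torch.randn(2, 6, 3, 224, 224))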
5. Cost-effectiveness analysis of an artificial intelligence-assisted diagnosis and treatment system for gastrointestinal endoscopy
Jia LI ; Lianlian WU ; Dairu DU ; Jun LIU ; Qing WANG ; Zi LUO ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(3):206-211
Objective:To analyze the cost-effectiveness of a relatively mature artificial intelligence (AI)-assisted diagnosis and treatment system (ENDOANGEL) for gastrointestinal endoscopy in China, and to provide objective and effective data support for hospital acquisition decisions.Methods:The numbers of gastrointestinal endoscopy procedures performed at the Endoscopy Center of Renmin Hospital of Wuhan University from January 2017 to December 2019 were collected to project the number of procedures over the expected service life (10 years) of ENDOANGEL. The net present value, payback period and average rate of return were used to analyze the cost-effectiveness of ENDOANGEL.Results:The net present value of one ENDOANGEL system over its expected 10-year service life was 6 724 100 yuan, the payback period was 1.10 years, and the average rate of return reached 147.84%.Conclusion:ENDOANGEL shows significant economic benefits, and it is reasonable for hospitals to acquire a mature AI-assisted diagnosis and treatment system for gastrointestinal endoscopy.
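The three appraisal metrics are standard capital-budgeting formulas; a hedged Python sketch follows. The cash flows and the 5% discount rate are illustrative placeholders, since the abstract does not report the study's actual inputs.

def appraise(initial_cost, annual_net_inflows, rate=0.05):
    # Net present value: discount each year's net inflow back to year 0.
    npv = -initial_cost + sum(
        cf / (1 + rate) ** (t + 1) for t, cf in enumerate(annual_net_inflows))
    # Payback period: years until cumulative undiscounted inflows cover the cost.
    cum, payback = 0.0, None
    for t, cf in enumerate(annual_net_inflows):
        if cum + cf >= initial_cost:
            payback = t + (initial_cost - cum) / cf
            break
        cum += cf
    # Average rate of return: mean annual inflow relative to the initial cost.
    arr = sum(annual_net_inflows) / len(annual_net_inflows) / initial_cost
    return npv, payback, arr

npv, payback, arr = appraise(1_000_000, [900_000] * 10)   # placeholder figures
print(f"NPV = {npv:,.0f} yuan, payback = {payback:.2f} years, ARR = {arr:.1%}")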
6. Evaluation of an assistant diagnosis system for gastric neoplastic lesions under white light endoscopy based on artificial intelligence
Junxiao WANG ; Zehua DONG ; Ming XU ; Lianlian WU ; Mengjiao ZHANG ; Yijie ZHU ; Xiao TAO ; Hongliu DU ; Chenxia ZHANG ; Xinqi HE ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(4):293-297
Objective:To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white light endoscopy.Methods:The diagnostic efficacy of ENDOANGEL-LD was tested on an image testing dataset and a video testing dataset. The image testing dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video testing dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image testing dataset were calculated, and its accuracy, sensitivity and specificity for gastric neoplastic lesions on the video testing dataset were compared with those of four senior endoscopists.Results:In the image testing dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990), respectively, while the accuracy, sensitivity and specificity for gastric neoplastic lesions were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505), respectively. In the video testing dataset, the sensitivity of ENDOANGEL-LD [100.0% (38/38) vs 85.5% (130/152), χ2=6.220, P=0.013] was higher than that of the four senior endoscopists. The accuracy [81.9% (68/83) vs 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) vs 60.6% (109/180), χ2=0.569, P=0.451] of ENDOANGEL-LD were comparable with those of the four senior endoscopists.Conclusion:ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions, helping endoscopists in clinical work.
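The reported figures all derive from confusion-matrix counts; a small Python helper reproduces the image-level metrics for gastric neoplastic lesions from the fractions in the abstract.

def metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Neoplastic lesions, image testing dataset: 273/300 detected, 441/505 correctly
# ruled out, so accuracy is (273 + 441)/805 = 714/805.
sens, spec, acc = metrics(tp=273, fn=27, tn=441, fp=64)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")
# -> sensitivity=91.0%, specificity=87.3%, accuracy=88.7%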
7. Application of an artificial intelligence-assisted endoscopic diagnosis system to the detection of focal gastric lesions (with video)
Mengjiao ZHANG ; Ming XU ; Lianlian WU ; Junxiao WANG ; Zehua DONG ; Yijie ZHU ; Xinqi HE ; Xiao TAO ; Hongliu DU ; Chenxia ZHANG ; Yutong BAI ; Renduo SHANG ; Hao LI ; Hao KUANG ; Shan HU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(5):372-378
Objective:To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions in gastroscopy.Methods:A total of 5 488 white light gastroscopic images (2 733 with focal gastric lesions and 2 755 without) from June to November 2019 and videos of 92 cases (288 168 clear stomach frames) from May to June 2020 at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively collected to test the AI System. A total of 3 997 prospective consecutive patients undergoing gastroscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were enrolled to assess the clinical applicability of the AI System. When the AI System recognized an abnormal lesion, it marked the lesion with a blue box as a warning. The ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives of the AI System were statistically analyzed.Results:In the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI System were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. In the video test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI System were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI System was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed by the AI System, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The number of false positives per gastroscopy was 2 (1, 4), most of which were due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid deposited in the fundus (9.1%, 1 021/11 225).Conclusion:The application of the AI System can increase the detection rate of focal gastric lesions.
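As an illustration of the blue-box warning described above, a minimal OpenCV sketch follows. The (x, y, w, h, confidence) detection format, the threshold, and the file name are assumptions; the authors' actual YOLO v3 inference pipeline is not specified.

import cv2

def draw_warnings(frame, detections, threshold=0.5):
    # Mark each detected focal lesion with a blue rectangle as a warning.
    for (x, y, w, h, conf) in detections:
        if conf < threshold:
            continue                 # suppress low-confidence boxes
        # OpenCV uses BGR channel order, so (255, 0, 0) is blue.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    return frame

frame = cv2.imread("gastroscopy_frame.png")          # placeholder file name
frame = draw_warnings(frame, [(120, 80, 60, 45, 0.87)])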
8. Cost-effectiveness of early gastric cancer screening using an artificial intelligence gastroscopy-assisted system
Li HUANG ; Lianlian WU ; Yijie ZHU ; Honggang YU
Chinese Journal of Digestive Endoscopy 2023;40(12):1001-1005
Objective:To compare the cost-effectiveness of early gastric cancer screening before and after using an artificial intelligence gastroscopy-assisted system.Methods:Gastroscopy cases before (non-AI group) and after (AI group) the use of the artificial intelligence gastroscopy-assisted system were retrospectively collected in Renmin Hospital of Wuhan University from January 1, 2017 to February 28, 2022, and the proportion of early gastric cancer among all gastric cancer was analyzed. Costs were estimated based on the standards of Renmin Hospital of Wuhan University and the 2021 edition of the Wuhan Disease Diagnosis-Related Group Payment Standards. Cost-effectiveness analysis was conducted per 100 thousand cases with and without the system, and the incremental cost-effectiveness ratio was calculated.Results:In the non-AI group, the proportion of early gastric cancer among all gastric cancer was 28.81% (70/243); per 100 thousand cases, the gastroscopy screening cost was 54 598.0 thousand yuan, the early gastric cancer treatment cost was 221.8 thousand yuan, and the total cost was 54 819.8 thousand yuan, while the direct effectiveness was 894.2 thousand yuan, the indirect effectiveness was 1 828.2 thousand yuan, and the total effectiveness was 2 722.4 thousand yuan. In the AI group, the proportion of early gastric cancer was 36.56% (366/1 001); per 100 thousand cases, the gastroscopy screening cost was 53 440.0 thousand yuan, the early gastric cancer treatment cost was 315.8 thousand yuan, and the total cost was 53 755.8 thousand yuan, while the direct effectiveness was 1 273.5 thousand yuan, the indirect effectiveness was 2 603.1 thousand yuan, and the total effectiveness was 3 876.6 thousand yuan. The use of the system reduced the cost of early gastric cancer screening by 1 064.0 thousand yuan and increased the benefit by 1 154.2 thousand yuan per 100 thousand cases. The incremental cost-effectiveness ratio was -0.92.Conclusion:The use of an artificial intelligence gastroscopy-assisted system for early gastric cancer screening can reduce medical costs and improve screening efficiency, and is recommended for gastroscopy screening.
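The incremental cost-effectiveness ratio is the change in cost divided by the change in effectiveness; a quick Python check with the per-100-thousand-case figures from the abstract (in thousand yuan):

def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

ratio = icer(cost_new=53_755.8, cost_old=54_819.8,
             effect_new=3_876.6, effect_old=2_722.4)
print(f"ICER = {ratio:.2f}")   # -1064.0 / 1154.2 -> -0.92, matching the abstract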
9. An artificial intelligence-based system for measuring the size of gastrointestinal lesions under endoscopy (with video)
Jing WANG ; Xi CHEN ; Lianlian WU ; Wei ZHOU ; Chenxia ZHANG ; Renquan LUO ; Honggang YU
Chinese Journal of Digestive Endoscopy 2022;39(12):965-971
Objective:To develop an artificial intelligence-based system for measuring the size of gastrointestinal lesions under white light endoscopy in real time.Methods:The system consisted of 3 models. Model 1 was used to identify biopsy forceps and mark the contour of the forceps in consecutive video frames. The results of model 1 were submitted to model 2 and classified into open and closed forceps. Model 3 was used to identify lesions and mark their boundaries in real time. The extent of each lesion was then compared with the contour of the forceps to calculate lesion size. Dataset 1 consisted of 4 835 images collected retrospectively from January 1, 2017 to November 30, 2019 in Renmin Hospital of Wuhan University, which were used for model training and validation. Dataset 2 consisted of images collected prospectively from December 1, 2019 to June 4, 2020 at the Endoscopy Center of Renmin Hospital of Wuhan University, which were used to test the ability of the models to segment the boundaries of the biopsy forceps and lesions. Dataset 3 consisted of 302 images of 151 simulated lesions, each of which included one image at a larger tilt angle (45° from the vertical line of the lesion) and one at a smaller tilt angle (10° from the vertical line of the lesion), to test the ability of the system to measure lesion size with the biopsy forceps in different states. Dataset 4 was a video test set consisting of prospectively collected videos taken at the Endoscopy Center of Renmin Hospital of Wuhan University from August 5, 2019 to September 4, 2020. The accuracy of model 1 in identifying the presence or absence of biopsy forceps, of model 2 in classifying the status of biopsy forceps (open or closed), and of model 3 in identifying the presence or absence of lesions was assessed against the results of endoscopist review or endoscopic surgical pathology as the gold standard. Intersection over union (IoU) was used to evaluate the forceps segmentation of model 1 and the lesion segmentation of model 3, and the absolute error and relative error were used to evaluate the ability of the system to measure lesion size.Results:(1) A total of 1 252 images were included in dataset 2, including 821 images of forceps (401 open and 420 closed), 431 images without forceps, 640 images of lesions and 612 images without lesions. Model 1 judged 433 images as non-forceps (430 accurate) and 819 images as forceps (818 accurate), for an accuracy of 99.68% (1 248/1 252). Based on the 818 forceps images correctly judged by model 1, the mean IoU for biopsy forceps segmentation was 0.91 (95% CI: 0.90-0.92). The classification accuracy of model 2 was evaluated on the same 818 forceps images: model 2 judged 384 images as open forceps (382 accurate) and 434 as closed forceps (416 accurate), for a classification accuracy of 97.56% (798/818). Model 3 judged 654 images as containing lesions (626 accurate) and 598 as non-lesions (584 accurate), for an accuracy of 96.65% (1 210/1 252). Based on the 626 lesion images correctly judged by model 3, the mean IoU for lesion segmentation was 0.86 (95% CI: 0.85-0.87).
(2) In dataset 3, when the tilt angle of the biopsy forceps was small, the mean absolute error of lesion size measurement was 0.17 mm (95% CI: 0.08-0.28 mm) and the mean relative error was 3.77% (95% CI: 0.00%-10.85%); when the tilt angle was large, the mean absolute error was 0.17 mm (95% CI: 0.09-0.26 mm) and the mean relative error was 4.02% (95% CI: 2.90%-5.14%). (3) Dataset 4 included 780 images from 59 endoscopic examination videos of 59 patients. The mean absolute error of lesion size measurement was 0.24 mm (95% CI: 0.00-0.67 mm), and the mean relative error was 9.74% (95% CI: 0.00%-29.83%).Conclusion:The system can accurately measure the size of gastrointestinal lesions under endoscopy and may improve the accuracy of endoscopists' size estimates.
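A minimal Python sketch of the two measurements underlying these results: IoU between a predicted and a reference mask, and lesion size estimated by scaling against the known physical width of the biopsy forceps. The 2.4 mm reference width is a common forceps size used here as an assumption, not a figure from the paper.

import numpy as np

def iou(pred_mask, true_mask):
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 0.0

def lesion_size_mm(lesion_px, forceps_px, forceps_mm=2.4):
    # mm-per-pixel scale derived from the forceps contour, applied to the lesion.
    return lesion_px * (forceps_mm / forceps_px)

pred = np.zeros((100, 100), bool); pred[20:60, 20:60] = True
true = np.zeros((100, 100), bool); true[30:70, 30:70] = True
print(f"IoU = {iou(pred, true):.2f}")              # 0.39 for this toy overlap
print(f"size = {lesion_size_mm(150, 40):.1f} mm")  # 150 px lesion, 40 px forceps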
10. Bifidobacterium animalis subsp. lactis BB-12 alleviates hippocampal neuroinflammation and cognitive dysfunction of mice after whole brain irradiation
Shan YANG ; Lianlian WU ; Wen GUO ; Yunhe DING ; Haibei DONG ; Xiaojin WU
Chinese Journal of Radiological Medicine and Protection 2022;42(11):823-829
Objective:To investigate the effects of Bifidobacterium animalis subsp. lactis BB-12 on hippocampal neuroinflammation and cognitive function of mice after whole brain radiotherapy.Methods:A total of 60 male C57BL/6J mice aged 7-8 weeks were randomly divided into 5 groups with 12 mice in each: control group (Con group), probiotic group (BB-12 group), irradiation group (IR group), irradiation plus Memantine group (IR+ Memantine group), and irradiation plus probiotic group (IR+ BB-12 group). The model of radiation-induced brain injury was established by 10 Gy whole brain irradiation with a medical linear accelerator. The Y-maze test was used to evaluate cognitive function. The activation of microglia and astrocytes was observed by immunofluorescence staining. The expression of the inflammatory cytokines interleukin-1β (IL-1β), IL-6 and tumor necrosis factor-α (TNF-α) was detected by quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) and Western blot.Results:The Y-maze test showed that, compared with the Con group, the percentage of novel arm entries among total entries into the three arms decreased significantly in the IR group (t=5.04, P<0.05), and BB-12 mitigated this radiation-induced cognitive dysfunction (t=4.72, P<0.05). Compared with the Con group, the numbers (t=3.05, 7.18, P<0.05) and circularity indices (t=6.23, 2.52, P<0.05) of Iba1- and GFAP-positive cells were increased, indicating activation of microglia and astrocytes in the hippocampus of the IR group; these alterations were eliminated by BB-12. After whole brain irradiation, the mRNA and protein expression levels of the inflammatory cytokines IL-1β, IL-6 and TNF-α in the hippocampus were significantly increased compared with the Con group (t_mRNA=4.10, 3.04, 4.18, P<0.05; t_protein=11.49, 7.04, 8.42, P<0.05), and were significantly reduced by BB-12 compared with the IR group (t_mRNA=4.20, 3.40, 2.84, P<0.05; t_protein=6.36, 4.03, 3.75, P<0.05).Conclusions:Bifidobacterium animalis subsp. lactis BB-12 can suppress microglia- and astrocyte-mediated neuroinflammation in the hippocampus of mice after radiotherapy and alleviate radiation-induced cognitive dysfunction, and therefore has potential application in alleviating radiation-induced brain injury.
