1.Learning curve and analysis of curative effects after balloon pulmonary angioplasty for chronic thromboembolic pulmonary hypertension.
Cheng HONG ; Linna HU ; Haimin LIU ; Xiaofeng WU ; Jianmin LU ; Jiangpeng LIN ; Wenliang GUO ; Xishi SUN ; Jielong LIN ; Riken CHEN ; Zhenzhen ZHENG
Chinese Medical Journal 2023;136(1):99-101
2.Prediction, analysis and application of learning curve of tooth preparation for all ceramic crowns of maxillary central incisors.
Si Yu WU ; Ya Ning LI ; Xiao ZHANG ; Long Wei LV ; Yun Song LIU ; Hong Qiang YE ; Yong Sheng ZHOU
Journal of Peking University(Health Sciences) 2023;55(1):108-113
OBJECTIVE:
To predict, based on the modified Wright learning curve model, the learning curve of tooth preparation for all ceramic crowns of maxillary central incisors on phantom head simulators for graduate students undergoing standardized dental resident training, and then to analyze and apply the learning curve.
METHODS:
Twelve graduate students participating in standardized dental resident training each prepared resin maxillary central incisors for all ceramic crowns 4 times on phantom head simulators. The preparations were evaluated by 3 prosthodontic experts with at least 10 years of experience, who scored the reduction, contour, taper, shoulder, finish line, margin placement, adjacent tooth injury, and preparation time. The learning rate was calculated from the scores of the 4 preparations, and the learning curve was predicted with the modified Wright learning curve model. According to the criteria of the standardized training skill examinations for dental residents in Beijing, 80 was taken as the qualified standard score. The minimum number of preparations needed to reach the qualified standard score (80) was then calculated in order to characterize the learning curve and evaluate the effectiveness of tooth preparation training.
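For orientation, the classic Wright model underlying such predictions has the power-law form sketched below; the abstract does not state how the model was modified, so the symbols (Y_x for the predicted score of the x-th preparation, Y_1 for the first score, r for the learning rate) follow only the standard textbook formulation, not the authors' exact equation.

```latex
% Classic (unmodified) Wright learning curve -- an illustrative reference only.
% Y_x : predicted score of the x-th preparation, Y_1 : score of the 1st preparation,
% r   : learning rate (about 106% here, so scores rise as practice accumulates).
\[
  Y_x = Y_1\, x^{b}, \qquad b = \frac{\ln r}{\ln 2},
  \qquad \frac{Y_{2x}}{Y_x} = 2^{b} = r .
\]
```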
RESULTS:
The scores of the 4 preparations were 64.03±7.80, 71.40±6.13, 74.33±5.96, and 75.98±4.52, respectively. The learning rate was (106±4)%, indicating an upward learning curve. There were no significant differences between the qualified standard score and the predicted scores from the 5th to the 13th preparation (P > 0.05), whereas the predicted score of the 14th preparation was higher than the qualified standard score (P < 0.05).
CONCLUSION
The learning curve of tooth preparation for all ceramic crowns of maxillary central incisors on phantom head simulators shows an upward trend for graduate students undergoing standardized dental resident training, and the model predicts that at least 14 preparations are needed before the predicted score exceeds the qualified standard score.
Humans; Tooth Preparation, Prosthodontic/methods*; Incisor; Learning Curve; Crowns; Tooth Preparation; Ceramics; Dental Porcelain; Dental Prosthesis Design
3.Automated Classification of Inherited Retinal Diseases in Optical Coherence Tomography Images Using Few-shot Learning.
Qi ZHAO ; Si Wei MAI ; Qian LI ; Guan Chong HUANG ; Ming Chen GAO ; Wen Li YANG ; Ge WANG ; Ya MA ; Lei LI ; Xiao Yan PENG
Biomedical and Environmental Sciences 2023;36(5):431-440
OBJECTIVE:
To develop a few-shot learning (FSL) approach for classifying optical coherence tomography (OCT) images in patients with inherited retinal disorders (IRDs).
METHODS:
In this study, an FSL model based on a student-teacher learning framework was designed to classify the images. A total of 2,317 images from 189 participants were included: 1,126 images showed IRDs, 533 were normal samples, and 658 were control samples.
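The abstract names a student-teacher framework without architectural detail; the minimal PyTorch sketch below illustrates one common form of such a framework (knowledge distillation). The ResNet-18 backbones, temperature T, and weighting alpha are illustrative assumptions, not the authors' implementation.

```python
# Minimal student-teacher (knowledge distillation) sketch of the kind the abstract
# describes; backbones, temperature and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 3  # IRD / normal / control, as listed in the abstract

def build_backbone(num_classes: int) -> nn.Module:
    net = models.resnet18(weights=None)                 # assumed backbone
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

teacher = build_backbone(NUM_CLASSES).eval()  # assumed: already trained on base data
student = build_backbone(NUM_CLASSES)         # trained on the few-shot classes

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft targets from the teacher with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One illustrative training step on a dummy batch standing in for OCT images.
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
with torch.no_grad():
    t_logits = teacher(images)
loss = distillation_loss(student(images), t_logits, labels)
loss.backward()
optimizer.step()
```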
RESULTS:
The FSL model achieved a total accuracy of 0.974-0.983, total sensitivity of 0.934-0.957, total specificity of 0.984-0.990, and total F1 score of 0.935-0.957, which were superior to those of the baseline model (total accuracy 0.943-0.954, total sensitivity 0.866-0.886, total specificity 0.962-0.971, and total F1 score 0.859-0.885). The FSL model also performed better in most subclassifications and had a higher area under the receiver operating characteristic (ROC) curve (AUC) in most subclassifications.
CONCLUSION
This study demonstrates that the FSL model can effectively classify OCT images from patients with IRDs and from normal and control participants using a smaller volume of data. The general principle and similar network architectures can also be applied to other retinal diseases with a low prevalence.
Humans; Tomography, Optical Coherence; Deep Learning; Retinal Diseases/diagnostic imaging*; Retina/diagnostic imaging*; ROC Curve
4.Prediction of pulp exposure risk of carious pulpitis based on deep learning.
Li WANG ; Fei WU ; Mo XIAO ; Yu-Xin CHEN ; Ligeng WU
West China Journal of Stomatology 2023;41(2):218-224
OBJECTIVES:
This study aims to predict the risk of pulp exposure in deep caries from radiographic images using convolutional neural network models, to compare the predictions of the network models with those of senior dentists, to evaluate the models' potential for teaching and training dental students and young dentists, and to help dentists clarify treatment plans and conduct effective doctor-patient communication before surgery.
METHODS:
A total of 206 cases of pulpitis caused by deep caries, treated at the Stomatological Hospital of Tianjin Medical University from 2019 to 2022, were selected according to the inclusion and exclusion criteria; the pulp was exposed during caries removal and cavity preparation in 104 cases and was not exposed in 102 cases. The 206 radiographic images collected were randomly divided into three sets: 126 images for training, 40 for validation, and 40 for testing. Three convolutional neural networks, visual geometry group network (VGG), residual network (ResNet), and dense convolutional network (DenseNet), were trained on the training set, and the validation set was used to tune the hyperparameters of each network. Finally, the 40 test-set images were used to evaluate the three network models. A senior dentist specializing in dental pulp was asked to predict, from the same 40 test-set radiographs, whether the pulp would be exposed. The gold standard was whether the pulp was actually exposed after caries removal and cavity preparation during the clinical procedure. The predictions of the three network models (VGG, ResNet, and DenseNet) and the senior dentist on the 40 test-set radiographs were compared using the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score to select the best network model.
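As an illustration of the kind of model described above, the sketch below fine-tunes a DenseNet for the binary exposed/not-exposed task; the DenseNet variant, pretraining, and hyperparameters are assumptions, not the study's settings.

```python
# Illustrative transfer-learning template for the binary "exposed / not exposed" task;
# not the authors' code -- variant, weights and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")             # assumed variant/init
model.classifier = nn.Linear(model.classifier.in_features, 2)   # exposed vs. not exposed

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a mini-batch of radiographs."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for preprocessed radiographs (3-channel, 224x224).
loss_value = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```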
RESULTS:
The DenseNet model performed best, with an AUC of 0.97; the AUC values of the ResNet model, the VGG model, and the senior dentist were 0.89, 0.78, and 0.87, respectively. Accuracy did not differ significantly between the senior dentist (0.850) and the DenseNet model (0.850) (P>0.05), and the kappa consistency test showed moderate agreement (kappa=0.6>0.4, P<0.05).
CONCLUSIONS
Among the three convolutional neural network models, the DenseNet model best predicts from radiographs whether the pulp will be exposed in deep caries, and its predictive performance is equivalent to that of senior dentists specializing in dental pulp.
Humans; Deep Learning; Neural Networks, Computer; Pulpitis/diagnostic imaging*; Reproducibility of Results; ROC Curve; Random Allocation
5.Comparison of the learning curve of robot-assisted and laparoscopic-assisted gastrectomy.
Jingmao XIE ; Yang LEI ; Hao ZHANG ; Yihui LIU ; Bo YI
Journal of Central South University(Medical Sciences) 2023;48(5):716-724
OBJECTIVES:
With the development of minimally invasive surgery, Da Vinci robotic technology has been widely used in clinical practice. This study aims to explore the possible influence of advanced surgical robotics on the surgical learning curve by comparing the initial clinical learning curves of 2 different surgical techniques: robotic-assisted gastrectomy (RAG) and laparoscopic-assisted gastrectomy (LAG).
METHODS:
From September 2017 to December 2020, a single chief surgeon completed a total of 108 radical gastrectomies for gastric cancer from the start of his experience, including 27 RAG procedures with the Da Vinci Si robotic system (RAG group) and 81 LAG procedures (LAG group). Lymph node dissection was performed according to the Japanese gastric cancer treatment guidelines. Surgical results, postoperative complications, oncological outcomes, and learning curves were analyzed.
RESULTS:
There were no significant differences in general characteristics, tumor size, pathological grade, or clinical stage between the 2 groups (P>0.05). The incidence of serious complications was lower in the RAG group than in the LAG group (P=0.003), as was intraoperative blood loss (P=0.046). More lymph nodes were dissected in the RAG group (P=0.003), with a clear advantage in the dissection of the No.9 (P=0.038) and No.11p (P=0.015) stations. The operation time of the RAG group was significantly longer than that of the LAG group (P=0.015). Learning curve analysis showed that the cumulative sum (CUSUM) value began to decline from the 10th case in the RAG group but only from the 28th case in the LAG group, so fewer cases were required to complete the learning curve for RAG than for LAG. The unique design of the surgical robot might help improve surgical efficiency and shorten the surgical learning curve.
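As a reference for the CUSUM analysis mentioned above, here is a minimal sketch of a mean-centred CUSUM learning curve applied to operative times; the authors' exact CUSUM variable and fitting procedure are not given in the abstract, and the times below are hypothetical.

```python
# Mean-centred CUSUM learning-curve sketch; the study's actual CUSUM variable and
# fitting method are not reported, so this is only an illustrative form.
import numpy as np

def cusum_curve(values):
    """CUSUM_i = sum_{j<=i} (x_j - mean(x)); once the curve starts falling,
    performance is better than the overall mean (the learning-curve turning point)."""
    values = np.asarray(values, dtype=float)
    return np.cumsum(values - values.mean())

# Hypothetical operative times (minutes) for consecutive cases.
op_times = [320, 300, 310, 290, 280, 270, 260, 250, 245, 240, 235, 230]
curve = cusum_curve(op_times)
turning_point = int(np.argmax(curve)) + 1   # case at which the CUSUM peaks
print(curve.round(1), "peak at case", turning_point)
```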
CONCLUSIONS
Advanced robotics helps experienced surgeons quickly master RAG skills. With the help of robotics, RAG is superior to LAG in No.9 and No.11p lymph node dissection and in reducing surgical trauma; RAG clears more lymph nodes than LAG and achieves better perioperative outcomes.
Humans; Robotics; Robotic Surgical Procedures/methods*; Learning Curve; Stomach Neoplasms/pathology*; Retrospective Studies; Laparoscopy/methods*; Lymph Node Excision/methods*; Gastrectomy/methods*; Treatment Outcome
6.Safety and learning curve of Da Vinci robotic single-anastomosis duodenal-ileal bypass with sleeve gastrectomy in the treatment of obesity patients.
Lun WANG ; Yu Hui ZHAO ; Ze Yu WANG ; Yang YU ; Jin Fa WANG ; Tao JIANG
Chinese Journal of Gastrointestinal Surgery 2022;25(5):454-461
Objective: To investigate the safety and learning curve of Da Vinci robotic single-anastomosis duodenal-ileal bypass with sleeve gastrectomy (SADI-S) in the treatment of patients with obesity. Methods: A descriptive case series study was performed. Clinical data of patients with obesity treated with Da Vinci robotic SADI-S at China-Japan Union Hospital of Jilin University from March 2020 to May 2021 were analyzed retrospectively. Inclusion criteria: (1) uncomplicated obesity with body mass index (BMI) ≥37.5 kg/m²; (2) BMI of 28 to <37.5 kg/m² complicated with type 2 diabetes, or with two metabolic syndrome components, or with obesity comorbidities; (3) SADI-S performed with the Da Vinci robotic surgery system. Patients who received bariatric procedures other than SADI-S or who underwent Da Vinci robotic SADI-S as a revisional operation were excluded. A total of 77 patients were enrolled, including 31 males and 46 females, with a median age of 33 (18-59) years, preoperative body weight of (123.0±26.2) kg, BMI of (42.2±7.1) kg/m², and waist circumference of (127.6±16.3) cm. Patients were numbered 1-77 in order of operation date. The textbook outcome (TO) standard was used to assess the clinical outcome of each patient, and surgical complications were graded with the Clavien-Dindo classification. The TO criteria were: operative time less than or equal to the 75th percentile for the cohort (210 min); postoperative hospital stay less than or equal to the 75th percentile for the cohort (7 d); complications below Clavien-Dindo grade II; no readmission; and no conversion to laparotomy or death. A patient undergoing robotic SADI-S was considered to have achieved TO when all 5 criteria were met. The TO rate was analyzed with the cumulative sum (CUSUM) method, and the learning curve of robotic SADI-S was characterized by plotting case number on the X-axis against CUSUM (TO rate) on the Y-axis. Results: The operative time of the 77 robotic SADI-S procedures was (182.9±37.5) minutes, and the postoperative hospital stay was 6 (4-55) days. There was no conversion to laparotomy and no death. Seven patients (7/77, 9.1%) had complications: four (5.2%) had grade II complications (one duodeno-ileal anastomotic leakage, one abdominal bleeding, one peritoneal effusion, and one delayed gastric emptying); two (2.6%) had grade IIIb complications, both gastric leakage; and one (1.3%) had a grade IV complication, postoperative respiratory failure. All were cured successfully. A total of 51 patients met the textbook outcome standard, and the CUSUM (TO rate) curve showed a positive, steadily increasing trend after the 46th case. Taking the 46th case as the cut-off, the patients were divided into a learning stage group (n=46) and a mastery stage group (n=31). There were no significant differences between the two groups in gender, age, weight, body mass index, waist circumference, ASA classification, standard liver volume, operative time, or postoperative complication rate (all P>0.05). The proportion of patients with an abdominal drainage tube was higher in the learning stage group than in the mastery stage group (54.3% versus 16.1%, P<0.05), and the postoperative hospital stay was longer in the learning stage group than in the mastery stage group [6 (4-22) d versus 6 (5-55) d, P<0.05]. Conclusion: Da Vinci robotic SADI-S is safe and feasible, with a learning curve of 46 cases.
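The five textbook-outcome criteria stated above translate directly into a simple check; the sketch below encodes them as given in the abstract, with field names that are illustrative rather than taken from the study's database.

```python
# Textbook-outcome (TO) check using the five criteria listed in the abstract:
# operative time <= 210 min, postoperative stay <= 7 d, complications below
# Clavien-Dindo grade II, no readmission, no conversion to laparotomy or death.
from dataclasses import dataclass

@dataclass
class Case:
    operative_time_min: float
    postop_stay_days: float
    clavien_grade: int        # 0 = none, 2 = grade II, 3 = grade III, ...
    readmitted: bool
    converted_or_died: bool

def is_textbook_outcome(c: Case) -> bool:
    return (c.operative_time_min <= 210
            and c.postop_stay_days <= 7
            and c.clavien_grade < 2
            and not c.readmitted
            and not c.converted_or_died)

# Example: a case meeting all five criteria counts toward the TO rate.
print(is_textbook_outcome(Case(185, 6, 0, False, False)))   # True
```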
Adult; Anastomosis, Surgical; Diabetes Mellitus, Type 2/surgery*; Female; Gastrectomy/methods*; Gastric Bypass/adverse effects*; Humans; Learning Curve; Male; Middle Aged; Obesity/surgery*; Obesity, Morbid/surgery*; Retrospective Studies; Robotic Surgical Procedures
7.A deep-learning model for the assessment of coronary heart disease and related risk factors via the evaluation of retinal fundus photographs.
Yao Dong DING ; Yang ZHANG ; Lan Qing HE ; Meng FU ; Xin ZHAO ; Lu Ke HUANG ; Bin WANG ; Yu Zhong CHEN ; Zhao Hui WANG ; Zhi Qiang MA ; Yong ZENG
Chinese Journal of Cardiology 2022;50(12):1201-1206
Objective: To develop and validate a deep learning model based on fundus photographs for the identification of coronary heart disease (CHD) and associated risk factors. Methods: Subjects aged >18 years with complete clinical examination data from 149 hospitals and medical examination centers in China were included in this retrospective study. Two radiologists, blinded to the study design, independently evaluated the coronary angiography images of each subject to establish the CHD diagnosis. A deep learning model using convolutional neural networks (CNN) was used to label the fundus images according to the presence or absence of CHD, and the images were proportionally divided into training and test sets for model training. The prediction performance of the model was evaluated in the test set using monocular and binocular fundus images, respectively. The prediction efficacy of the algorithm for cardiovascular risk factors (e.g., age, systolic blood pressure, sex) and coronary events was evaluated using the area under the receiver operating characteristic curve (AUC) for classification and the R2 value for regression. Results: The study retrospectively collected 51 765 fundus images from 25 222 subjects, including 10 255 patients with CHD; 14 419 subjects in the cohort were male. Of these, 46 603 fundus images from 22 701 subjects were included in the training set and 5 162 fundus images from 2 521 subjects in the test set. In the test set, the deep learning model predicted age with an R2 of 0.931 (95%CI 0.929-0.933) for monocular photographs and 0.938 (95%CI 0.936-0.940) for binocular photographs. The AUC values for sex identification from monocular and binocular fundus images were 0.983 (95%CI 0.982-0.984) and 0.988 (95%CI 0.987-0.989), respectively. For CHD prediction, the AUC was 0.876 (95%CI 0.874-0.877) with monocular fundus photographs and 0.885 (95%CI 0.884-0.888) with binocular fundus photographs; with binocular photographs, the sensitivity was 0.894, the specificity 0.755, and the accuracy 0.714. Conclusion: The deep learning model based on fundus photographs performs well in identifying coronary heart disease and in assessing related risk factors such as age and sex.
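As a reference for the evaluation described above, here is a minimal sketch of computing R2 for the continuous prediction (age) and AUC for the binary predictions (sex, CHD) with scikit-learn; the arrays are placeholders, not study data.

```python
# Illustrative computation of the reported metrics: R^2 for a continuous target and
# AUC for a binary target. Values below are placeholders, not the study's data.
import numpy as np
from sklearn.metrics import r2_score, roc_auc_score

# Continuous target: predicted vs. true age (years).
age_true = np.array([45, 52, 61, 38, 70, 55])
age_pred = np.array([47, 50, 63, 40, 68, 57])
print("R2 for age:", r2_score(age_true, age_pred))

# Binary target: CHD present (1) or absent (0) vs. model probability.
chd_true = np.array([1, 0, 1, 0, 1, 0])
chd_prob = np.array([0.91, 0.20, 0.75, 0.35, 0.88, 0.10])
print("AUC for CHD:", roc_auc_score(chd_true, chd_prob))
```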
Humans; Male; Retrospective Studies; Deep Learning; Fundus Oculi; ROC Curve; Algorithms; Risk Factors; Coronary Disease/diagnostic imaging*
8.Comparison among Four Deep Learning Image Classification Algorithms in AI-based Diatom Test.
Yong-Zheng ZHU ; Ji ZHANG ; Qi CHENG ; Hui-Xiao YU ; Kai-Fei DENG ; Jian-Hua ZHANG ; Zhi-Qiang QIN ; Jian ZHAO ; Jun-Hong SUN ; Ping HUANG
Journal of Forensic Medicine 2022;38(1):31-39
OBJECTIVES:
To select four deep learning image classification algorithms with a relatively good balance of complexity and accuracy for automatic diatom recognition, and to identify the most suitable algorithm for diatom recognition so as to provide a data reference for research on automated diatom testing in forensic medicine.
METHODS:
The "diatom" and "background" small sample size data set (20 000 images) of digestive fluid smear of corpse lung tissue in water were built to train, validate and test four convolutional neural network (CNN) models, including VGG16, ResNet50, InceptionV3 and Inception-ResNet-V2. The receiver operating characteristic curve (ROC) of subjects and confusion matrixes were drawn, recall rate, precision rate, specificity, accuracy rate and F1 score were calculated, and the performance of each model was systematically evaluated.
RESULTS:
The InceptionV3 model achieved much better results than the other three models, with a balanced recall of 89.80% and precision of 92.58%. VGG16 and Inception-ResNet-V2 had similar diatom recognition performance; although their recall and precision for diatoms were not balanced, their recognition ability was acceptable. ResNet50 had the lowest diatom recognition performance, with a recall of 55.35%. In terms of feature extraction, all four models extracted features of both diatoms and background and relied mainly on the diatom region as the basis for identification.
CONCLUSIONS
The Inception-based models showed stronger directivity and targeting in extracting diatom features. InceptionV3 achieved the best diatom identification and feature extraction performance of the four models and is the most suitable for routine forensic diatom examination.
Algorithms; Deep Learning; Diatoms; Humans; Neural Networks, Computer; ROC Curve
9.Deep learning applied to two-dimensional color Doppler flow imaging ultrasound images significantly improves diagnostic performance in the classification of breast masses: a multicenter study.
Teng-Fei YU ; Wen HE ; Cong-Gui GAN ; Ming-Chang ZHAO ; Qiang ZHU ; Wei ZHANG ; Hui WANG ; Yu-Kun LUO ; Fang NIE ; Li-Jun YUAN ; Yong WANG ; Yan-Li GUO ; Jian-Jun YUAN ; Li-Tao RUAN ; Yi-Cheng WANG ; Rui-Fang ZHANG ; Hong-Xia ZHANG ; Bin NING ; Hai-Man SONG ; Shuai ZHENG ; Yi LI ; Yang GUANG
Chinese Medical Journal 2021;134(4):415-424
BACKGROUND:
Current deep learning approaches to the diagnosis of breast masses mainly address the distinction between benign and malignant lesions. In China, breast masses are divided into four categories according to the treatment method: inflammatory masses, adenosis, benign tumors, and malignant tumors. These categories are important for guiding clinical treatment. In this study, we aimed to develop a convolutional neural network (CNN) for the classification of these four breast mass types using ultrasound (US) images.
METHODS:
Taking breast biopsy or pathological examination as the reference standard, CNNs were used to establish models for the four-way classification of breast masses using US images from 3623 patients from 13 centers. The patients were randomly divided into training and test groups (n = 1810 vs. n = 1813). Separate models were created for two-dimensional (2D) images only, 2D plus color Doppler flow imaging (2D-CDFI), and 2D-CDFI plus pulsed wave Doppler (2D-CDFI-PW) images. The performance of the three models was compared in terms of sensitivity, specificity, area under the receiver operating characteristic curve (AUC), positive (PPV) and negative predictive values (NPV), and positive (LR+) and negative likelihood ratios (LR-). The performance of the 2D model was further compared between masses of different sizes using the above indicators, between images from different hospitals using AUC, and against the performance of 37 radiologists.
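As a reference for the diagnostic indices listed above, here is a minimal sketch deriving sensitivity, specificity, PPV, NPV, and likelihood ratios from a one-versus-rest 2x2 table; the counts are placeholders, not study data.

```python
# Per-class (one-vs-rest) diagnostic indices from a 2x2 table; counts are placeholders.
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

# Example: 'malignant tumor' vs. the other three categories pooled.
print(diagnostic_indices(tp=420, fp=60, fn=45, tn=1288))
```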
RESULTS:
The accuracies of the 2D, 2D-CDFI, and 2D-CDFI-PW models on the test set were 87.9%, 89.2%, and 88.7%, respectively; the 2D-CDFI model was therefore the most accurate. The AUCs for the classification of benign tumors, malignant tumors, inflammatory masses, and adenosis were 0.90, 0.91, 0.90, and 0.89, respectively (95% confidence intervals [CIs], 0.87-0.91, 0.89-0.92, 0.87-0.91, and 0.86-0.90). The 2D model showed an accuracy of 81.7% on breast masses ≤1 cm and 82.3% on breast masses >1 cm, a significant difference between the two groups (P < 0.001). The accuracy of the CNN classification on the test set (89.2%) was significantly higher than that of the radiologists (30%).
CONCLUSIONS:
The CNN may have high accuracy for classification of US images of breast masses and perform significantly better than human radiologists.
TRIAL REGISTRATION
Chictr.org, ChiCTR1900021375; http://www.chictr.org.cn/showproj.aspx?proj=33139.
Area Under Curve; Breast/diagnostic imaging*; Breast Neoplasms/diagnostic imaging*; China; Deep Learning; Humans; ROC Curve; Sensitivity and Specificity
