1. Expression levels and clinical significance of serum miR-497 and miR-383 in patients with esophageal cancer
Fang WAN ; Gang YANG ; Rui LI ; Qijing WAN
Journal of International Oncology 2024;51(4):204-209
Objective: To observe the serum levels of miR-497 and miR-383 in patients with esophageal cancer and to analyze their clinical significance in esophageal cancer. Methods: Esophageal cancer patients admitted to Union Wuhan Red Cross Hospital from July 2018 to February 2020 were enrolled as the esophageal cancer group (n=96) and were divided into a recurrence group (n=29) and a non-recurrence group (n=67) based on follow-up results. The control group comprised healthy individuals who underwent physical examinations at the same hospital during the corresponding period (n=83), and the benign lesion group comprised patients with benign esophageal lesions treated at the same hospital during the corresponding period (n=78). Real-time fluorescence quantitative PCR was used to detect serum levels of miR-497 and miR-383 in each group. The Pearson method was used to analyze the correlation between serum miR-497 and miR-383 levels in esophageal cancer patients. The relationships between serum miR-497 and miR-383 levels and the clinicopathological characteristics of esophageal cancer patients were analyzed. Receiver operating characteristic (ROC) curves were plotted to evaluate the efficacy of serum miR-497, miR-383, and their combination in predicting the prognosis of esophageal cancer patients. Results: The serum miR-497 levels in the control group, benign lesion group, and esophageal cancer group were 1.01±0.18, 0.86±0.15, and 0.77±0.14, respectively, and the serum miR-383 levels were 1.02±0.21, 0.95±0.15, and 0.84±0.15, respectively; the differences were statistically significant (F=52.59, P<0.001; F=25.12, P<0.001). Serum miR-497 and miR-383 levels differed significantly between any two of the three groups (all P<0.05). Pearson correlation analysis revealed that the serum miR-497 level was positively correlated with the miR-383 level in esophageal cancer patients (r=0.46, P<0.001). Serum miR-497 and miR-383 levels differed significantly among esophageal cancer patients with different tumor diameters (t=6.58, P<0.001; t=5.06, P<0.001), lymph node metastasis statuses (t=5.55, P<0.001; t=4.63, P<0.001), and TNM stages (t=5.00, P<0.001; t=2.75, P<0.001). Serum miR-497 (0.83±0.15 vs. 0.62±0.11, t=6.78, P<0.001) and miR-383 (0.91±0.16 vs. 0.67±0.13, t=7.12, P<0.001) levels in the non-recurrence group were both higher than those in the recurrence group. ROC curve analysis showed that the cutoff value of serum miR-497 for predicting the prognosis of esophageal cancer was 0.72, with a sensitivity of 68.97%, a specificity of 68.66%, and an area under the curve (AUC) of 0.756; the cutoff value of serum miR-383 was 0.84, with a sensitivity of 82.76%, a specificity of 67.16%, and an AUC of 0.827; the combination of the two achieved a sensitivity of 79.31%, a specificity of 85.07%, and an AUC of 0.899, predicting prognosis better than serum miR-497 (Z=3.31, P=0.001) or miR-383 (Z=2.51, P=0.012) alone. Conclusion: Serum levels of miR-497 and miR-383 in patients with esophageal cancer are lower than those in healthy individuals and patients with benign esophageal lesions, and are related to tumor diameter, lymph node metastasis, TNM stage, and prognosis. Combined detection of the two has predictive value for the prognosis of esophageal cancer.
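As an illustration of the ROC workflow this abstract describes, the following Python sketch runs the same kind of analysis on simulated data drawn from the reported group means and standard deviations. The Youden-index cutoff selection, the logistic-regression combination of the two markers, and all variable names are assumptions for illustration only, not the study's actual code or data.

```python
# Hypothetical sketch: ROC analysis for two serum markers and their combination.
# Data are SIMULATED from the abstract's reported means/SDs, not real patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# 0 = non-recurrence (n=67), 1 = recurrence (n=29), as in the abstract
y = np.concatenate([np.zeros(67), np.ones(29)]).astype(int)
# Simulated relative expression levels (lower in the recurrence group)
mir497 = np.where(y == 0, rng.normal(0.83, 0.15, y.size), rng.normal(0.62, 0.11, y.size))
mir383 = np.where(y == 0, rng.normal(0.91, 0.16, y.size), rng.normal(0.67, 0.13, y.size))

def youden_cutoff(y_true, score):
    """Return (cutoff, sensitivity, specificity) maximizing the Youden index."""
    fpr, tpr, thr = roc_curve(y_true, score)
    k = np.argmax(tpr - fpr)
    return thr[k], tpr[k], 1 - fpr[k]

# Single markers: recurrence is associated with LOWER levels, so negate the score
for name, marker in [("miR-497", mir497), ("miR-383", mir383)]:
    cut, sens, spec = youden_cutoff(y, -marker)
    print(f"{name}: cutoff={-cut:.2f} sens={sens:.2%} spec={spec:.2%} "
          f"AUC={roc_auc_score(y, -marker):.3f}")

# Combined predictor: logistic regression on both markers, then ROC on its output
X = np.column_stack([mir497, mir383])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print(f"combined AUC={roc_auc_score(y, combined):.3f}")
```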
2. Retinal image quality assessment based on FA-Net
Cheng WAN ; Qijing YOU ; Jing SUN ; Jianxin SHEN ; Qiuli YU
Chinese Journal of Experimental Ophthalmology 2019;37(8):608-612
Objective: To propose a deep learning-based retinal image quality classification network, FA-Net, making the convolutional neural network (CNN) more suitable for image quality assessment in eye disease screening systems. Methods: The main network of FA-Net was composed of VGG-19. On this basis, an attention mechanism was added to the CNN. Transfer learning was used in training, with ImageNet weights used to initialize the network. The attention network is based on foreground extraction: it extracts blood vessels and suspected lesion regions and assigns higher weights to the regions of interest, enhancing the learning of these important areas. Results: A total of 2 894 fundus images were used to train FA-Net. FA-Net achieved 97.65% classification accuracy on a test set of 2 170 fundus images, with a sensitivity of 0.978, a specificity of 0.960, and an area under the curve (AUC) of 0.995. Conclusions: Compared with other CNNs, the proposed FA-Net has better classification performance and can evaluate retinal fundus image quality more accurately and efficiently. The network takes into account the human visual system (HVS) and the human attention mechanism. By adding an attention module to the VGG-19 network structure, the classification results can be better interpreted and classification performance is improved.
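The abstract does not publish FA-Net's exact architecture. The PyTorch sketch below only illustrates the general idea it describes: an ImageNet-pretrained VGG-19 backbone whose feature maps are re-weighted by a spatial attention mask computed from a vessel/lesion foreground map. The module shapes, mask wiring, and class names are assumptions, not the paper's implementation.

```python
# Minimal sketch of the FA-Net idea, under the assumptions stated above.
import torch
import torch.nn as nn
from torchvision import models

class FANetSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # VGG-19 pretrained on ImageNet (transfer learning, as described)
        self.backbone = models.vgg19(
            weights=models.VGG19_Weights.IMAGENET1K_V1).features
        # Small conv net turning a 1-channel foreground map (vessels +
        # suspected lesions) into a spatial attention mask in [0, 1]
        self.attention = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, image, foreground):
        feats = self.backbone(image)                       # (N, 512, H/32, W/32)
        mask = self.attention(foreground)                  # (N, 1, H, W)
        mask = nn.functional.interpolate(mask, size=feats.shape[2:])
        feats = feats * (1 + mask)                         # emphasize regions of interest
        return self.classifier(self.pool(feats).flatten(1))

# usage: logits = FANetSketch()(images, vessel_maps)  # images: (N, 3, 512, 512)
```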
3. Evaluation of multi-classification method of color fundus photograph quality based on ResNet50-OC
Cheng WAN ; Xueting ZHOU ; Qijing YOU ; Jianxin SHEN ; Qiuli YU
Chinese Journal of Experimental Ophthalmology 2021;39(9):785-790
Objective: To evaluate the efficiency of the deep learning-based ResNet50-OC model for multi-class classification of color fundus photographs. Methods: A proprietary dataset (PD) collected in July 2018 at BenQ Hospital of Nanjing Medical University and the EyePACS dataset were included. The included images were classified by clinical ophthalmologists into five types: high quality, underexposure, overexposure, blurred edges, and lens flare. The training dataset contained 1 000 images per type (800 from EyePACS and 200 from PD) and the testing dataset contained 500 images per type (400 from EyePACS and 100 from PD), for 5 000 training images and 2 500 testing images in total. All images were normalized and augmented. Transfer learning was used to initialize the parameters of the network model, and on this basis the current mainstream deep learning classification networks (VGG, Inception-ResNet-v2, ResNet, DenseNet) were compared. ResNet50, the network with the best accuracy and Micro F1 value, was selected as the main network of the classification model in this study. During training, the One-Cycle strategy was introduced to accelerate model convergence, yielding the optimal model, ResNet50-OC. ResNet50-OC was applied to multi-class classification of fundus image quality, and the accuracy and Micro F1 values of ResNet50 and ResNet50-OC were evaluated. Results: The multi-class classification accuracy and Micro F1 values of ResNet50 were significantly higher than those of VGG, Inception-ResNet-v2, ResNet34, and DenseNet. The accuracy of the ResNet50-OC model reached 98.77% after 15 epochs of training, higher than the 98.76% achieved by the ResNet50 model after 50 epochs. The Micro F1 value of the ResNet50-OC model was 98.78% after 15 epochs, the same as that of the ResNet50 model after 50 epochs. Conclusions: The proposed ResNet50-OC model is accurate and effective for multi-class classification of color fundus photograph quality. The One-Cycle strategy reduces the number of training epochs required and improves classification efficiency.
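For readers wanting to reproduce the One-Cycle setup described above, a minimal PyTorch sketch follows. Only the 15-epoch figure, the 5 000-image training set, and the five quality classes come from the abstract; the optimizer choice, learning rates, and batch size are illustrative assumptions.

```python
# Hedged sketch: ResNet50 with transfer learning, trained under the One-Cycle
# learning-rate policy via PyTorch's built-in OneCycleLR scheduler.
import torch
from torch import nn, optim
from torchvision import models

NUM_CLASSES = 5  # high quality, underexposure, overexposure, blurred edges, lens flare
EPOCHS = 15                       # from the abstract
STEPS_PER_EPOCH = 5000 // 32      # 5 000 training images, batch size 32 (assumed)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the head

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()  # One-Cycle updates the LR every batch, not every epoch
```

The One-Cycle policy ramps the learning rate up to max_lr and back down within the single training run, which is what allows convergence in far fewer epochs than a fixed-schedule baseline.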
4. Evaluation of low-quality fundus image enhancement based on cycle-constraint adversarial network
Xueting ZHOU ; Weihua YANG ; Xiao HUA ; Qijing YOU ; Jing SUN ; Jianxin SHEN ; Cheng WAN
Chinese Journal of Experimental Ophthalmology 2021;39(9):769-775
Objective: To propose and evaluate a cycle-consistent generative adversarial network (CycleGAN) for enhancing low-quality fundus images, such as blurred, underexposed, and overexposed images. Methods: A dataset of 700 high-quality and 700 low-quality fundus images selected from the EyePACS dataset was used to train the image enhancement network. The selected images were cropped and uniformly scaled to 512×512 pixels. Two generative models and two discriminative models were used to build the CycleGAN. The generative models generated matching high-/low-quality images from input low-/high-quality fundus images, and the discriminative models determined whether an image was original or generated. The proposed algorithm was compared with three image enhancement algorithms, contrast limited adaptive histogram equalization (CLAHE), dynamic histogram equalization (DHE), and multi-scale retinex with color restoration (MSRCR), using qualitative visual assessment together with clarity, BRISQUE score, hue, and saturation as quantitative indicators. The original and enhanced images were fed into a diabetic retinopathy (DR) diagnostic network, and the diagnostic accuracy and specificity were compared. Results: CycleGAN achieved the best results in enhancing all three types of low-quality fundus images: blurred, underexposed, and overexposed. The enhanced fundus images had high contrast, rich colors, and clear optic disc and blood vessel structures. The clarity of the images enhanced by CycleGAN was second only to that of the CLAHE algorithm. The BRISQUE quality score of the images enhanced by CycleGAN was 0.571, which was 10.2%, 7.3%, and 10.0% higher than that of the CLAHE, DHE, and MSRCR algorithms, respectively. CycleGAN achieved 103.03 in hue and 123.24 in saturation, both higher than those of the other three algorithms. CycleGAN took only 35 seconds to enhance 100 images, slower only than CLAHE. The images enhanced by CycleGAN achieved an accuracy of 96.75% and a specificity of 99.60% in DR diagnosis, both higher than those of the original images. Conclusions: CycleGAN can effectively enhance low-quality blurred, underexposed, and overexposed fundus images and improve the accuracy of a computer-aided DR diagnostic network. The enhanced fundus images are helpful for doctors carrying out pathological analysis and may have great application value in clinical ophthalmic diagnosis.
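A minimal sketch of the CycleGAN objective underlying the method above is given below, assuming the least-squares adversarial loss and L1 cycle-consistency loss of the original CycleGAN formulation. The function names, the lambda weight, and the omission of the identity loss are illustrative assumptions; the paper's exact loss configuration is not reproduced here.

```python
# Sketch of the CycleGAN training objective for low-/high-quality fundus
# image translation. G: low -> high, F: high -> low; D_H and D_L are the two
# discriminators. Network definitions and the training loop are omitted.
import torch
import torch.nn.functional as F_loss  # aliased so the generator can be named F

def cyclegan_losses(G, F, D_H, D_L, low, high, lambda_cyc=10.0):
    fake_high, fake_low = G(low), F(high)
    # Adversarial losses (least-squares GAN form): each generator tries to
    # make its discriminator output 1 ("real") on generated images
    loss_gan = (F_loss.mse_loss(D_H(fake_high), torch.ones_like(D_H(fake_high)))
                + F_loss.mse_loss(D_L(fake_low), torch.ones_like(D_L(fake_low))))
    # Cycle consistency: translating there and back should recover the input,
    # which keeps enhancement from altering retinal content
    loss_cyc = (F_loss.l1_loss(F(fake_high), low)
                + F_loss.l1_loss(G(fake_low), high))
    return loss_gan + lambda_cyc * loss_cyc
```

The cycle-consistency term is the key constraint for this application: it lets the network learn from unpaired high- and low-quality images while preserving the optic disc and vessel structures that downstream DR diagnosis depends on.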