1. Expression of CD147 protein in human cervical carcinoma
Xi ZHOU ; Jing JI ; Yi WU ; Pengsheng ZHENG
Journal of Xi'an Jiaotong University (Medical Sciences) 2010;31(1):14-16,25
Objective To investigate the expression of CD147 and its significance in human cervical carcinoma. Methods Western blotting and immunohistochemical staining were used to detect CD147 expression in cervical cancer and normal cervix uteri tissues. Results CD147 protein was expressed in all cervical carcinomas (41/41, 100.0%) and in most normal cervix uteri tissues (11/12, 91.7%). Forms of CD147 with different molecular weights were present in cervical tissues. The percentage of CD147-positive cells and the expression level of CD147 were higher in cervical carcinomas than in normal cervix (P<0.05). Conclusion CD147 might be recognized as a marker of cell proliferation. The high expression of CD147 in cervical carcinomas suggests that it might be a potential target for cervical carcinoma therapy.
2. Beneficial effects of Schisandrin B on cardiac structure and function in a mouse model of myocardial infarction
Pengsheng CHEN ; Jia LIU ; Haoyu MENG ; Peng WU ; Zhijian YANG
Chinese Journal of Cardiology 2017;45(11):963-970
Objective:
To investigate whether Schisandrin B (Sch B) could improve cardiac structure and function in mice with myocardial infarction (MI), and to explore the related mechanisms.
Methods:
Male C57BL/6J mice were randomized into sham (
3. Deep Learning-Based Segmentation of Extra-Pelvic Organs and Metastases in Advanced Prostate Cancer Based on MET-RADS-P
Xiang LIU ; Xuelei QUBIE ; Jingyun WU ; Pengsheng WU ; Xiaodong ZHANG ; Xiaoying WANG
Chinese Journal of Medical Imaging 2024;32(2):168-174
Purpose To explore the feasibility of deep learning-based segmentation of extra-pelvic regions and metastases in advanced prostate cancer based on the metastasis reporting and data system for prostate cancer (MET-RADS-P). Materials and Methods Four datasets (68, 91, 57 and 263 patients with head, neck, chest and abdomen metastases, respectively), collected from January 2017 to January 2022 at Peking University First Hospital, were retrospectively used to develop a classification model of scanning range and segmentation models of different regions and metastases according to the scanning sites (head, neck, chest and abdomen). In addition, 90 patients with pathologically confirmed prostate cancer who underwent whole-body MRI were collected for external validation of the developed models. Manual annotation of the regions and metastases was used as the reference standard for model evaluation. The evaluation indexes included the Dice similarity coefficient (DSC) and volumetric similarity (VS). Results In the external validation set, the classification accuracies for head, neck, chest and abdomen were 100% (90/90), 98.89% (89/90), 96.67% (87/90) and 94.44% (85/90), respectively. The DSC and VS values of the segmentation model for organs in different regions ranged over (0.86±0.10)-(0.99±0.01) and (0.89±0.10)-(0.99±0.01), respectively; those for metastases in different regions ranged over (0.65±0.07)-(0.72±0.13) and (0.74±0.04)-(0.82±0.13), respectively. Conclusion The 3D U-Net model based on deep learning may achieve segmentation of extra-pelvic regions and metastases in advanced prostate cancer.
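The DSC and VS reported in this abstract are standard overlap and volume-agreement metrics for binary segmentation masks. A minimal illustrative sketch of how they are typically computed (not the authors' code; function names are ours):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def volumetric_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """VS = 1 - |V_pred - V_ref| / (V_pred + V_ref), ignoring overlap."""
    vp, vr = int(pred.astype(bool).sum()), int(ref.astype(bool).sum())
    if vp + vr == 0:
        return 1.0
    return 1.0 - abs(vp - vr) / (vp + vr)
```

Note that VS compares only voxel counts, so a model can score high VS with poor spatial overlap; that is why the two metrics are reported together.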
4. 3D ResNet deep learning model for automatically identifying sequences of prostate multi-parametric MRI: a multicenter study
Zhaonan SUN ; Kexin WANG ; Wenpeng HUANG ; Pengsheng WU ; Xiaodong ZHANG ; Xiaoying WANG
Chinese Journal of Medical Imaging Technology 2024;40(5):769-773
Objective To construct a 3D ResNet deep learning model based on multi-parametric prostate MRI (mpMRI), and to observe its value for automatically identifying the main MR sequences. Methods A total of 1 153 sets of pre-biopsy prostate mpMRI data of 1 086 patients who underwent ultrasound-guided prostate biopsy in 3 hospitals were collected and divided into different image datasets, i.e. T2WI, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps, with a total of 5 151 images. The images were categorized as non-fat-suppressed T2WI (T2WI_nan, n=1 000), fat-suppressed T2WI (T2WI_fs, n=1 188), high b-value DWI (DWI_High, b-value≥500 s/mm², n=1 045), low b-value DWI (DWI_Low, b-value<500 s/mm², n=1 012) or ADC maps (n=906), and were divided into a training set (n=4 122), a verification set (n=513) and a test set (n=516) at a ratio of 8∶1∶1. After preprocessing and augmentation, a 3D ResNet model for automatically identifying image categories was trained and optimized on the training and verification sets, and its classification efficiency was evaluated on the test set. Results The accuracy, sensitivity, specificity, positive predictive value, negative predictive value, F1 score and Kappa value of the obtained model for automatically identifying image categories in the test set were 0.995-1.000, 0.990-1.000, 0.998-1.000, 0.990-1.000, 0.998-1.000, 0.995-1.000 and 0.994-1.000, respectively. Conclusion The obtained 3D ResNet deep learning model could effectively and automatically identify the main sequences of prostate mpMRI.
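The per-class sensitivity, specificity, PPV, NPV and F1 ranges quoted above all derive from a multi-class confusion matrix evaluated one-vs-rest. An illustrative sketch of that computation (ours, not the authors' evaluation code):

```python
import numpy as np

def per_class_metrics(cm: np.ndarray, k: int):
    """One-vs-rest metrics for class k of a confusion matrix
    (rows = true class, columns = predicted class)."""
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp   # class-k images predicted as another class
    fp = cm[:, k].sum() - tp   # other images predicted as class k
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    return sens, spec, ppv, npv, f1
```

Reporting the minimum-maximum of these values across the five sequence classes yields ranges of the form quoted in the abstract.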
5. 3D V-Net deep learning model for automatic segmentation of prostate on T2WI and apparent diffusion coefficient maps
Zhaonan SUN ; Jiangkai HE ; Kexin WANG ; Wenpeng HUANG ; Pengsheng WU ; Xiaodong ZHANG ; Xiaoying WANG
Chinese Journal of Medical Imaging Technology 2024;40(9):1426-1431
Objective To develop a 3D V-Net deep learning segmentation model based on multi-center data, and to evaluate its value for automatic segmentation of the prostate on T2WI and apparent diffusion coefficient (ADC) maps. Methods A total of 2 894 sets of multi-parametric MRI data, acquired within 1 month before biopsy, of 2 673 patients with clinically suspected prostate cancer from 3 medical centers were retrospectively collected. Finally 5 974 sets of axial images were enrolled, including 3 654 sets of T2WI and 2 320 sets of ADC maps. Prostate contours were manually annotated layer by layer on axial T2WI and ADC maps, and the left-to-right, anterior-to-posterior and superior-to-inferior diameters and the volume of the prostate were measured and taken as reference standards. The images were divided into a training set (n=4 780, including 2 907 sets of T2WI and 1 873 sets of ADC maps), a verification set (n=601, including 384 sets of T2WI and 217 sets of ADC maps) and a test set (n=593, including 363 sets of T2WI and 230 sets of ADC maps) at a ratio of 8∶1∶1. After preprocessing and augmentation, 3D V-Net was used to construct and train the segmentation model on the training and verification sets, and the segmentation performance of the model was evaluated on the test set using the Dice similarity coefficient (DSC), Jaccard coefficient (JACARD) and volume similarity (VS), respectively. The parameters measured with the model were compared with the reference standards, and their correlations were explored. Results The DSC, JACARD and VS of the model for automatic segmentation of the prostate on T2WI in the test set were all higher than those on the corresponding ADC maps (all P<0.001). The left-to-right, anterior-to-posterior and superior-to-inferior diameters of the prostate measured with the model on both T2WI and ADC maps were all larger than the reference standards (all P<0.001), while no significant difference was found for the volume (both P>0.05). All parameters measured with the model on T2WI and ADC maps were positively correlated with the reference standards (rs=0.794-0.985). Conclusion The 3D V-Net deep learning model could automatically segment the prostate on T2WI and ADC maps with high accuracy, and its efficiency based on T2WI was better than that based on ADC maps.
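The Jaccard coefficient (JACARD) used alongside DSC here is the intersection-over-union of the two masks; the two metrics are monotonically related by J = D / (2 - D), so they rank segmentations identically but Jaccard penalizes partial overlap more heavily. An illustrative sketch (ours, not the authors' code):

```python
import numpy as np

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard (IoU) = |A∩B| / |A∪B| for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty
    return np.logical_and(pred, ref).sum() / union

# For any pair of masks, jaccard == dice / (2 - dice),
# which is why Jaccard values are always <= the corresponding DSC.
```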
6. A preliminary study on automatic measurement of abduction angle after total hip arthroplasty using artificial intelligence algorithm on antero-posterior radiographs
Kexin WANG ; Xiaodong ZHANG ; Pengsheng WU ; Jialun LI ; Daojian ZHANG ; He WANG
Journal of Practical Radiology 2024;40(1):140-144
Objective To explore the feasibility of automating the measurement of the abduction angle after total hip arthroplasty (THA) on postoperative radiographs by using deep learning algorithms. Methods The data were retrospectively collected. A total of 381 cases were used to develop the deep learning model. Two radiologists annotated the key points on the images (the lateral-superior and medial-inferior points of the acetabular cups, and the teardrops). The data were split into a training dataset (304 cases), a tuning dataset (38 cases) and a test dataset (39 cases). A 2D U-Net model was trained to segment the key points, and the abduction angle was automatically measured. After development of the model, an external validation dataset was collected (143 cases). The Dice similarity coefficient (DSC) and mean absolute error (MAE) were used to evaluate the prediction efficiency of the model in the test dataset and the external validation dataset. The Bland-Altman test was used to analyze the agreement between the abduction angle measured automatically by the model and the physician measurement. Results The DSC were 0.870-0.905 and 0.690-0.750 in the test dataset and the external validation dataset, and the corresponding MAE were 0.311-0.561 and 0.951-1.310. In the Bland-Altman analysis, only 6.52% (3/46) and 2.08% (3/144) of the abduction angle measurements in the test dataset and the external validation dataset fell outside the 95% limits of agreement (LoA). In the qualitative evaluation of the abduction angle, the agreement of the model with the physician was 97.8% and 90.3% in the test dataset and the external validation dataset, respectively. Conclusion It is feasible to use deep learning algorithms to automatically measure the abduction angle after THA on X-ray images, achieving accuracy similar to that of physicians.
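Given the four key points the abstract describes, the cup abduction angle is conventionally the acute angle between the cup-opening line (lateral-superior to medial-inferior point of the acetabular cup) and the inter-teardrop reference line. A hedged geometric sketch of that final measurement step (function and argument names are ours; the article does not publish its implementation):

```python
import math

def abduction_angle(cup_lat_sup, cup_med_inf, teardrop_r, teardrop_l):
    """Acute angle in degrees between the cup-opening line and the
    inter-teardrop line; each argument is an (x, y) pixel coordinate."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    cup = vec(cup_med_inf, cup_lat_sup)   # line across the cup opening
    ref = vec(teardrop_l, teardrop_r)     # pelvic horizontal reference
    dot = cup[0] * ref[0] + cup[1] * ref[1]
    norm = math.hypot(*cup) * math.hypot(*ref)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(ang, 180.0 - ang)          # report the acute angle
```

Because both lines come from predicted key points, small localization errors propagate directly into the angle, which is what the MAE values above quantify.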