Application of deep learning-based multimodal imaging to the automatic segmentation of glioblastoma targets for radiotherapy
10.3760/cma.j.cn112271-20220331-00133
- VernacularTitle:基于深度学习的多模态影像脑胶质母细胞瘤放疗靶区的自动勾画研究
- Author:
Suqing TIAN 1; Xin XU; Yuliang JIANG; Yinglong LIU; Zhuojie DAI; Wei ZHANG; Lecheng JIA; Junjie WANG
Author Information
1. Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191, China
- Keywords:
Glioblastoma;
Automatic segmentation;
3D convolutional network;
Multimodal imaging
- From:
Chinese Journal of Radiological Medicine and Protection
2022;42(9):697-703
- Country: China
- Language: Chinese
- Abstract:
Objective: To explore the effects of multimodal imaging on the performance of deep learning-based automatic segmentation of glioblastoma targets for radiotherapy.
Methods: The computed tomography (CT) images and the contrast-enhanced T1-weighted (T1C) and T2 fluid-attenuated inversion recovery (T2-FLAIR) magnetic resonance imaging (MRI) sequences of 30 patients with glioblastoma were collected. The gross tumor volume (GTV) and the corresponding clinical target volumes CTV1 and CTV2 of each patient were manually delineated according to the criteria of the Radiation Therapy Oncology Group (RTOG). Four datasets were designed: a unimodal CT dataset (the CT sequences of the 30 cases only), a multimodal CT-T1C dataset (the CT and T1C sequences of the 30 cases), a multimodal CT-T2-FLAIR dataset (the CT and T2-FLAIR sequences of the 30 cases), and a trimodal CT-MRI dataset (the CT, T1C, and T2-FLAIR sequences of the 30 cases). For each dataset, the data of 25 cases were used to train a modified 3D U-Net model, and the data of the remaining 5 cases were used for testing. The segmentation of the GTV, CTV1, and CTV2 of the testing cases obtained with the 3D U-Net model was evaluated with the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the relative volume error (RVE).
Results: The best automatic segmentation of the GTV was achieved with the CT-MRI dataset. Compared with the segmentation obtained with the CT dataset (DSC: 0.94 vs. 0.79; HD95: 2.09 mm vs. 12.33 mm; RVE: 1.16% vs. 20.14%), the differences in the DSC (t=3.78, P<0.05) and HD95 (t=4.07, P<0.05) obtained with the CT-MRI dataset were statistically significant. Highly consistent automatic segmentation of CTV1 and CTV2 was also achieved with the CT-MRI dataset (DSC: 0.90 vs. 0.91; HD95: 3.78 mm vs. 2.41 mm; RVE: 3.61% vs. 5.35%); however, compared with the CT dataset, the differences in the DSC and HD95 of CTV1 and CTV2 were not statistically significant (P>0.05). In addition, the 3D U-Net model produced some errors in predicting the upper and lower bounds of the GTV and the organs adjacent to CTV2 (e.g., the brainstem and eyeball).
Conclusions: The modified 3D U-Net model based on the multimodal CT-MRI dataset can achieve better segmentation of glioblastoma targets, and its application could potentially benefit clinical practice.
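The abstract describes four input configurations built by pairing CT with different MRI sequences. The snippet below is a minimal sketch, not the authors' pipeline, of how co-registered, intensity-normalized volumes can be stacked as input channels for a 3D segmentation network; the helper name build_input and the placeholder volume shapes are illustrative assumptions.

```python
# Sketch (assumed, not from the paper): stacking modalities as channels.
# CT only -> 1 channel; CT+T1C or CT+T2-FLAIR -> 2 channels; CT+T1C+T2-FLAIR -> 3 channels.
import numpy as np
import torch

def build_input(ct, t1c=None, t2_flair=None):
    """Stack available modalities (each a Z x Y x X array, already co-registered
    and normalized) into a (1, C, Z, Y, X) tensor for a 3D network."""
    volumes = [ct] + [v for v in (t1c, t2_flair) if v is not None]
    x = np.stack(volumes, axis=0).astype(np.float32)   # (C, Z, Y, X)
    return torch.from_numpy(x).unsqueeze(0)            # (1, C, Z, Y, X)

# Illustration with random placeholder volumes (trimodal CT-MRI case, C=3):
ct = np.random.rand(64, 128, 128)
t1c = np.random.rand(64, 128, 128)
t2f = np.random.rand(64, 128, 128)
x = build_input(ct, t1c, t2f)
print(x.shape)   # torch.Size([1, 3, 64, 128, 128])
```

With this layout, switching between the unimodal, bimodal, and trimodal datasets only changes the number of input channels of the 3D U-Net; the abstract does not specify the authors' other architectural modifications.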
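The three evaluation indices named in the abstract (DSC, HD95, RVE) follow standard definitions. The sketch below shows one common way to compute them from binary 3D masks with NumPy/SciPy; the function names and the surface-extraction approach for HD95 are assumptions rather than the authors' implementation, and non-empty masks with known voxel spacing (in mm) are assumed.

```python
# Sketch (assumed, not the authors' code) of DSC, HD95, and RVE on binary 3D masks.
import numpy as np
from scipy import ndimage

def dice(pred, ref):
    """DSC = 2|A intersect B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def relative_volume_error(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """RVE (%) = |V_pred - V_ref| / V_ref * 100."""
    voxel = float(np.prod(spacing))
    v_pred, v_ref = pred.sum() * voxel, ref.sum() * voxel
    return abs(v_pred - v_ref) / v_ref * 100.0

def _surface(mask):
    """Surface voxels = mask minus its one-voxel erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """95th percentile of the symmetric surface distances (mm)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    sp, sr = _surface(pred), _surface(ref)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_ref = ndimage.distance_transform_edt(~sr, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    d_pred_to_ref = dt_ref[sp]    # pred-surface voxels -> ref surface
    d_ref_to_pred = dt_pred[sr]   # ref-surface voxels -> pred surface
    return np.percentile(np.concatenate([d_pred_to_ref, d_ref_to_pred]), 95)
```

Under these definitions, a higher DSC and lower HD95/RVE indicate closer agreement between the automatic segmentation and the manual delineation, matching the direction of the comparisons reported in the Results.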