Feasibility study of three-dimensional nnU-Net deep learning network for automatic segmentation of colorectal cancer based on abdominal CT images
10.3760/cma.j.cn112149-20231231-00505
- VernacularTitle:三维nnU-Net深度学习网络基于腹部CT图像自动分割结直肠癌的可行性研究
- Author:
Kaiyi ZHENG 1; Hao WU; Wenjing YUAN; Ziqi JIA; Xiangliang TAN; Xiaohui DUAN; Zhibo WEN; Xian LIU; Weicui CHEN
- Author Information:
1. School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Keywords:
Colorectal neoplasms;
Tomography, X-ray computed;
Deep learning;
Artificial intelligence;
Neural network
- From:
Chinese Journal of Radiology
2024;58(8):829-835
- Country: China
- Language: Chinese
- Abstract:
Objective: To investigate the feasibility of a three-dimensional no new U-Net (3D nnU-Net) deep learning (DL) network for the automatic segmentation of colorectal cancer (CRC) based on abdominal CT images.

Methods: This was a retrospective cross-sectional study. From January 2018 to May 2023, a total of 2 180 patients with pathologically confirmed primary CRC were enrolled from Guangdong Provincial Hospital of Traditional Chinese Medicine (center 1, n=777), Nanfang Hospital, Southern Medical University (center 2, n=732), and Sun Yat-sen Memorial Hospital (center 3, n=671). Baseline abdominal CT examinations were acquired at the 3 centers on 7 scanner models from 4 vendors and included both the arterial phase (AP) and the venous phase (VP). Two radiologists manually delineated volumes of interest covering the entire tumor on the dual-phase contrast-enhanced CT images. The CT data from center 1 and center 3 were merged and divided into a training set (n=1 159) and a validation set (n=289) by weighted random sampling at a ratio of 4∶1, and the patients from center 2 served as an independent external test set (n=732). The 3D nnU-Net segmentation model was trained and tested. With the manual annotations as the reference standard, segmentation performance was evaluated by phase and by tumor location using the segmentation coverage rate (SCR), Dice similarity coefficient (DSC), recall (REC), precision (PRE), F1-score, and 95% Hausdorff distance (HD95). The mean manual and automatic segmentation times were compared using the independent samples t-test.

Results: In the independent external test set, the 3D nnU-Net model performed better on AP than on VP CT images. On the AP images, the SCR, DSC, REC, PRE, F1-score, and HD95 were 0.865, 0.714, 0.716, 0.736, 0.714, and 27.228, respectively; on the VP images, they were 0.834, 0.679, 0.710, 0.675, 0.679, and 29.358, respectively. The model performed best on right-sided colon cancer, with SCR, DSC, REC, PRE, F1-score, and HD95 on the AP images of 0.901, 0.775, 0.780, 0.787, 0.775, and 21.793, respectively, followed by left-sided colon cancer and rectal cancer, while performance was worst for transverse colon cancer (SCR, DSC, REC, PRE, F1-score, and HD95 of 0.731, 0.631, 0.641, 0.630, 0.631, and 38.721, respectively). Automatic segmentation of a single phase took (1.0±0.3) min, versus (17.5±6.0) min for manual segmentation (t=128.24, P<0.001).

Conclusions: After training and validation on a multicenter dataset acquired with CT scanners from several vendors, the 3D nnU-Net DL model can automatically segment CRC on abdominal CT images and shows good robustness and generalization ability.
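The abstract evaluates the model with overlap metrics (DSC, REC, PRE, F1-score) and the 95% Hausdorff distance (HD95) against the manual labels. As an illustration only, the sketch below (not the authors' code) shows one common way to compute these metrics for 3D binary tumor masks with NumPy and SciPy; the function names, the 1e-8 smoothing terms, and the voxel `spacing` argument are assumptions, and the segmentation coverage rate (SCR) is omitted because its definition is not given in the abstract.

```python
# Minimal sketch of segmentation metrics for 3D binary masks (illustrative only).
import numpy as np
from scipy import ndimage

def overlap_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Dice (DSC), recall (REC), precision (PRE) and F1-score for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dsc = 2 * tp / (pred.sum() + gt.sum() + 1e-8)
    rec = tp / (gt.sum() + 1e-8)          # sensitivity w.r.t. manual label
    pre = tp / (pred.sum() + 1e-8)        # positive predictive value
    f1 = 2 * pre * rec / (pre + rec + 1e-8)
    return {"DSC": dsc, "REC": rec, "PRE": pre, "F1": f1}

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces.

    `spacing` is the voxel size (e.g. from the CT header), so the result is
    in the same physical unit (typically mm)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: mask minus its binary erosion.
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    if not pred_surf.any() or not gt_surf.any():
        return float("nan")  # undefined when either mask is empty
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dist_to_gt[pred_surf]
    d_gt_to_pred = dist_to_pred[gt_surf]
    return float(np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95))
```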