Automatic target volume tracking in magnetic resonance imaging-guided radiotherapy based on artificial intelligence
10.3760/cma.j.cn112271-20240711-00257
- VernacularTitle:基于人工智能的磁共振引导放疗靶区目标自动追踪研究
- Author:
Yiling WANG 1; Yue ZHAO 1; Qiuhan LIU 1; Jie WANG 1; Yu FAN 1
Author Information
1. Radiotherapy Center, Sichuan Cancer Hospital & Institute, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Center, Cancer Hospital Affiliated to the University of Electronic Science and Technology of China, Chengdu 610041, China
- Publication Type:Journal Article
- Keywords:
Magnetic resonance image-guided radiotherapy;
Deep learning;
Cine magnetic resonance image;
Target volume tracing
- From:
Chinese Journal of Radiological Medicine and Protection
2025;45(6):558-565
- Country: China
- Language:Chinese
- Abstract:
Objective: To explore the feasibility of automatic target volume tracking in the Elekta Unity magnetic resonance imaging (MRI)-guided radiotherapy system and to further enhance its real-time target volume tracking performance by introducing deep learning based on a large Transformer model.
Methods: A total of 4 661 frames of cine MRI binary images from 75 patients with malignant chest or abdominal tumors treated with MRI-guided radiotherapy were retrospectively collected as the training set, and another 500 frames from 10 patients were collected as an independent test set. A medical image format conversion module was developed to convert the binary images into MetaImage files. The outer contours of the tumor target volumes in the test-set cine MRI images were manually delineated as ground-truth labels. Taking the first cine MRI frame of each patient as the reference image and the remaining frames as moving images, a Transformer-based deep learning model was constructed to estimate the deformation vector field (DVF) of each moving image relative to the reference image. The Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), negative Jacobian determinant fraction (NegJ), and average processing time per cine MRI frame were calculated and compared with those of a conventional B-spline scheme to quantitatively assess the model's target volume tracking accuracy, DVF physical plausibility, and execution efficiency.
Results: The Transformer-based deep learning model delivered superior target volume tracking performance, with improved DSC [(0.84 ± 0.05) vs. (0.74 ± 0.16), t = 11.44, P < 0.05] and HD95 [(9.25 ± 2.98) vs. (14.70 ± 8.55) mm, t = -11.83, P < 0.05].
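The binary-image-to-MetaImage conversion step described in the Methods can be illustrated with a minimal sketch. This is not the authors' converter module (which is not published); it is an assumed pure-NumPy writer for an uncompressed 2-D `.mha` file, with a toy mask standing in for one cine MRI frame:

```python
import numpy as np

def write_mha(path, arr, spacing=(1.0, 1.0)):
    """Write a 2-D uint8 array as an uncompressed MetaImage (.mha):
    an ASCII key = value header followed by the raw pixel bytes."""
    header = (
        "ObjectType = Image\n"
        f"NDims = {arr.ndim}\n"
        "BinaryData = True\n"
        "BinaryDataByteOrderMSB = False\n"
        # MetaImage lists dimensions in x y order, i.e. reversed NumPy shape.
        f"DimSize = {' '.join(str(s) for s in arr.shape[::-1])}\n"
        f"ElementSpacing = {' '.join(str(s) for s in spacing)}\n"
        "ElementType = MET_UCHAR\n"
        "ElementDataFile = LOCAL\n"   # pixel data follows in the same file
    )
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        f.write(np.ascontiguousarray(arr, dtype=np.uint8).tobytes())

# Toy binary frame standing in for one cine MRI mask (shapes are assumptions).
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:150, 110:160] = 1
write_mha("frame_0001.mha", mask)
```

In practice a library such as SimpleITK would typically handle this conversion; the hand-rolled header above just makes the format explicit.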
Furthermore, the model reduced the average per-frame processing time from 1.95 s to 30.99 ms, an efficiency gain of nearly two orders of magnitude. It also yielded NegJ values similar to those of the B-spline scheme, suggesting that the DVFs it extracted had comparable physical plausibility.
Conclusions: The Transformer-based deep learning model for automatic target volume tracking fills a functional gap in the Elekta Unity MRI-guided radiotherapy system, enabling relatively accurate and efficient automatic tracking of moving tumor targets in the chest and abdomen.
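The three evaluation metrics used in the abstract (DSC, HD95, NegJ) can be sketched for 2-D binary masks and a dense DVF. This is a minimal NumPy/SciPy illustration under common definitions of these metrics, not the authors' evaluation code; the toy masks and DVF shape `(2, H, W)` are assumptions:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    def dists(src, dst):
        # Distance from each surface pixel of src to the surface of dst.
        dt = ndimage.distance_transform_edt(~surface(dst), sampling=spacing)
        return dt[surface(src)]
    a, b = a.astype(bool), b.astype(bool)
    return np.percentile(np.concatenate([dists(a, b), dists(b, a)]), 95)

def neg_jacobian_fraction(dvf):
    """Fraction of pixels where the mapping x -> x + u(x) folds,
    i.e. has a non-positive Jacobian determinant. dvf: (2, H, W)."""
    du_dy, du_dx = np.gradient(dvf[0])   # derivatives of the y-displacement
    dv_dy, dv_dx = np.gradient(dvf[1])   # derivatives of the x-displacement
    det = (1 + du_dy) * (1 + dv_dx) - du_dx * dv_dy
    return float((det <= 0).mean())

# Toy example: two overlapping squares and an identity (all-zero) DVF.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[24:44, 24:44] = True
print(round(dice(a, b), 3))                           # → 0.64
print(hd95(a, b))                                     # small positive value
print(neg_jacobian_fraction(np.zeros((2, 64, 64))))   # → 0.0 (no folding)
```

NegJ is the standard check that a deformation is physically plausible: a diffeomorphic (fold-free) DVF should give a fraction of zero.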