Development and validation of a multi-modality fusion deep learning model for differentiating glioblastoma from solitary brain metastases
10.11817/j.issn.1672-7347.2024.230248
- VernacularTitle:区分胶质母细胞瘤和单发性脑转移瘤的多模态融合深度学习模型的开发和验证
- Author:
Shanshan SHEN 1, 2;
Chunquan LI
;
Yaohua FAN
;
Shanfu LU
;
Ziye YAN
;
Hu LIU
;
Haihang ZHOU
;
Zijian ZHANG
Author Information
1. Department of Medical Oncology, Jiaxing Second Hospital, Jiaxing, Zhejiang 314000
2. School of Information Engineering, Nanchang University, Nanchang 330031
- Keywords:
glioblastoma;
solitary brain metastases;
deep learning;
multimodality data;
disentanglement
- From:
Journal of Central South University(Medical Sciences)
2024;49(1):58-67
- Country:China
- Language:Chinese
-
Abstract:
Objective: Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is a commonly used method for screening brain tumors and evaluating their prognosis, but conventional MRI sequences have limited specificity and sensitivity in the differential diagnosis of GBM and BMs. In recent years, deep neural networks have shown great potential for diagnostic classification and for building clinical decision support systems. This study aims to apply radiomics features extracted by deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and to further explore the impact of multimodality data fusion on the classification task. Methods: Standard-protocol cranial MRI sequence data from 135 newly diagnosed GBM patients and 73 patients with SBMs, confirmed by histopathology or clinical diagnosis, were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted sequences were selected as the 3 inputs to the model; regions of interest (ROIs) were manually delineated on the 3 registered modal MR images; multimodality radiomics features were extracted; dimensionality was reduced using a random forest (RF)-based feature selection method; and the importance of each feature was further analyzed. Second, a contrastive disentanglement method was used to separate the shared features and complementary features across the different modalities. Finally, the response of each sample to GBM and SBMs was predicted by fusing the 2 types of features from the different modalities.
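The RF-based feature selection step described in Methods can be sketched as follows. This is a minimal illustration with synthetic stand-in data, assuming scikit-learn; the cohort size (208 = 135 GBM + 73 SBMs), the feature count, and the number of retained features are illustrative assumptions, not the study's actual pipeline parameters.

```python
# Sketch: random forest (RF)-based feature selection on radiomics features.
# Synthetic data stands in for the multimodality radiomics feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(208, 100))   # 208 patients x 100 radiomics features (synthetic)
y = rng.integers(0, 2, size=208)  # 0 = GBM, 1 = SBM (synthetic labels)

# Fit an RF and read per-feature importances (Gini importance, normalized to sum to 1).
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
importances = rf.feature_importances_

# Reduce dimensionality by keeping the top-k features (k is an arbitrary choice here).
k = 20
top_idx = np.argsort(importances)[::-1][:k]
X_reduced = X[:, top_idx]
print(X_reduced.shape)  # (208, 20)
```

The `feature_importances_` vector also supports the per-feature importance analysis mentioned in Methods, e.g. by ranking or plotting it.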
Results: The radiomics features combined with machine learning and the multimodal fusion method had good discriminatory ability for GBM and SBMs. Compared with single-modal data, the multimodal fusion models using machine learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. The proposed contrastive disentangled multimodal MR fusion method performed well: its AUC, accuracy (ACC), sensitivity (SEN), and specificity (SPE) in the test set were 0.985, 0.984, 0.900, and 0.990, respectively. Compared with other multimodal fusion methods, it achieved the best AUC, ACC, and SEN. In the ablation experiment verifying the contribution of each module, AUC, ACC, and SEN increased by 1.6%, 10.9%, and 15.0%, respectively, when the 3 loss functions were used simultaneously. Conclusion: A deep learning-based contrastive disentangled multimodal MR radiomics feature fusion technique helps to improve the accuracy of GBM and SBMs classification.
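The multi-classifier AUC comparison reported in Results can be sketched as below. This is a toy example on synthetic data, assuming scikit-learn; the reported AUC values (0.974, 0.978, 0.943, 0.938, 0.947) come from the study's own cohort and will not be reproduced by this sketch.

```python
# Sketch: comparing SVM, logistic regression, RF, AdaBoost, and GBDT by test-set AUC,
# as in the Results section, but on a synthetic stand-in for the fused features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.metrics import roc_auc_score

# Synthetic "fused" feature matrix: 208 samples (cohort size), 40 features (arbitrary).
X, y = make_classification(n_samples=208, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "Logistic": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GBDT": GradientBoostingClassifier(random_state=0),
}

aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # probability of the positive class
    aucs[name] = roc_auc_score(y_te, proba)
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

The same loop structure extends to ACC, SEN, and SPE by thresholding `proba` and computing the corresponding confusion-matrix metrics.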