Study on multimodal models based on radiomics and deep learning for predicting acute respiratory distress syndrome in patients with acute pancreatitis
10.3760/cma.j.cn115667-20250326-00036
- VernacularTitle:基于影像组学和深度学习的多模态模型在预测急性胰腺炎并发急性呼吸窘迫综合征中的研究
- Author:
Ran TAO 1; Lei ZHANG; Yuzheng XUE; Yiping SHEN; Meiyu CHEN; Yu WANG; Minyue YIN; Jinzhou ZHU
Author Information
1. Department of Gastroenterology, Donghai County People's Hospital, Lianyungang 222300, China
- Publication Type:Journal Article
- Keywords:
Acute pancreatitis;
Acute respiratory distress syndrome;
Imaging examination;
Deep learning;
Models
- From:
Chinese Journal of Pancreatology
2025;25(5):341-348
- Country:China
- Language:Chinese
- Abstract:
Objective: To establish and validate a multimodal model based on radiomics and deep learning for predicting acute pancreatitis (AP) complicated with acute respiratory distress syndrome (ARDS). Methods: Patients diagnosed with AP at The First Affiliated Hospital of Soochow University, Donghai County People's Hospital and Jintan Affiliated Hospital of Jiangsu University between January 2017 and December 2023 were enrolled. Based on whether ARDS was diagnosed within 1 week after admission, the patients were classified into the ARDS group and the non-ARDS group. Patients from The First Affiliated Hospital of Soochow University (n=406) were used as the training set (non-ARDS group n=212 vs ARDS group n=194), while patients from the Donghai and Jintan hospitals served as the test set (n=175; non-ARDS group n=104 vs ARDS group n=71). Clinical data, laboratory tests and the occurrence of systemic inflammatory response syndrome (SIRS) within 24 hours after admission were collected. Scoring systems including the bedside index for severity in acute pancreatitis (BISAP), the Ranson score and the modified CT severity index (MCTSI) were calculated. Radiomics features were extracted from three-dimensional CT images to develop a radiomics model based on the XGBoost algorithm. At the same time, a deep learning model was constructed using deep convolutional networks to extract deep features. Finally, clinical features and the predictions of the two aforementioned models were integrated to establish a multimodal model, also based on the XGBoost algorithm. To enhance model interpretability, variable importance ranking and local interpretable visualization were used. Receiver operating characteristic (ROC) curves of the three models and the three scoring systems (BISAP, Ranson and MCTSI) were plotted, and the areas under the curves (AUCs), together with sensitivity and specificity, were calculated to evaluate predictive performance for ARDS in AP patients.
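The fusion step described in the Methods is a late-fusion (stacking) design: base models score each patient, and their output probabilities are concatenated with clinical variables to train a boosted-tree meta-model. A minimal sketch of that design follows, using entirely synthetic data; sklearn's GradientBoostingClassifier stands in for the study's XGBoost, and the feature blocks and sample sizes are illustrative placeholders, not the study's actual variables.

```python
# Hedged sketch of a stacked multimodal classifier: two base models
# (radiomics-style and deep-feature-style) feed their probabilities,
# alongside clinical variables, into a gradient-boosting meta-model.
# All data here are synthetic; GradientBoostingClassifier is a stand-in
# for the XGBoost algorithm named in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
radiomics = rng.normal(size=(n, 20))   # placeholder radiomics features
deep_feats = rng.normal(size=(n, 16))  # placeholder deep-learning features
clinical = rng.normal(size=(n, 8))     # placeholder clinical variables

# Synthetic ARDS label loosely driven by one column of each block.
logit = radiomics[:, 0] + deep_feats[:, 0] + clinical[:, 0]
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)

idx_train, idx_test = train_test_split(
    np.arange(n), test_size=0.3, random_state=0, stratify=y
)

# Base model 1: the "radiomics model".
rad_model = GradientBoostingClassifier(random_state=0)
rad_model.fit(radiomics[idx_train], y[idx_train])
# Base model 2: stand-in for the deep learning model (a booster on deep features).
deep_model = GradientBoostingClassifier(random_state=0)
deep_model.fit(deep_feats[idx_train], y[idx_train])

def meta_features(idx):
    """Concatenate clinical variables with the two base-model probabilities."""
    return np.column_stack([
        clinical[idx],
        rad_model.predict_proba(radiomics[idx])[:, 1],
        deep_model.predict_proba(deep_feats[idx])[:, 1],
    ])

# Meta-model: the "multimodal model" trained on stacked inputs.
meta = GradientBoostingClassifier(random_state=0)
meta.fit(meta_features(idx_train), y[idx_train])
auc = roc_auc_score(y[idx_test], meta.predict_proba(meta_features(idx_test))[:, 1])
print(f"multimodal test AUC: {auc:.3f}")
```

Training the meta-model on the base models' held-in predictions, as done here for brevity, risks optimistic stacking; in practice out-of-fold base predictions would be used.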
Results: In the multimodal model for predicting ARDS in AP patients, the predictions of the deep learning model and the radiomics model were the most important variables, followed by SIRS, C-reactive protein, procalcitonin, albumin, glucose, creatinine, neutrophil count, and Ca2+. In the training set, the multimodal model achieved an AUC of 0.933 for predicting ARDS in AP patients, higher than those of the radiomics model (0.727), the deep learning model (0.877), MCTSI (0.870), the Ranson score (0.620) and BISAP (0.898). In the test set, its AUC was 0.916, again higher than those of the radiomics model (0.660), the deep learning model (0.864), MCTSI (0.851), the Ranson score (0.609), and BISAP (0.860). Conclusions: By integrating clinical structured data with radiomics and deep learning features, the multimodal model could predict the risk of ARDS in AP patients at an early stage, and its performance was better than that of the single-modal models and the traditional scoring systems.
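The AUC, sensitivity and specificity figures reported above come from standard ROC analysis. A small self-contained sketch of that computation follows; the labels and probabilities are toy values for illustration only, and the operating point is chosen by the Youden index (sensitivity + specificity - 1), one common convention the abstract does not explicitly specify.

```python
# Hedged sketch of ROC-based evaluation: compute the AUC, then pick an
# operating threshold by the Youden index and report the sensitivity and
# specificity at that point. Data are toy values, not study results.
import numpy as np
from sklearn.metrics import auc, roc_curve

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])           # toy ARDS labels
y_prob = np.array([0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9])  # toy model scores

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
area = auc(fpr, tpr)

# Youden index J = sensitivity + specificity - 1 = tpr - fpr,
# maximized over candidate thresholds.
j = tpr - fpr
best = int(np.argmax(j))
sensitivity = tpr[best]
specificity = 1 - fpr[best]
print(f"AUC={area:.4f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Comparing two such AUCs formally (e.g. multimodal vs BISAP) would additionally require a paired test such as DeLong's, which the sketch omits.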