Full-scale diffusion model for adaptive feature medical image fusion.
- DOI: 10.7507/1001-5515.202412050
- Author:
Jing DI 1;
Shuhui SHI 1;
Heran WANG 1;
Chan LIANG 1;
Yunlong ZHU 1
- Author Information:
1. School of Electronic & Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, P. R. China.
- Publication Type: Journal Article
- Keywords:
Adaptive fusion rules;
Full-scale diffusion model;
Medical image fusion;
Multi-scale denoising U-network;
Multiscale feature integration module
- MeSH:
Humans;
Image Processing, Computer-Assisted/methods*;
Algorithms;
Neural Networks, Computer;
Diagnostic Imaging/methods*;
Image Interpretation, Computer-Assisted/methods*
- From:
Journal of Biomedical Engineering
2025;42(5):871-882
- Country: China
- Language: Chinese
- Abstract:
To address issues such as loss of detail information, blurred target boundaries, and unclear structural hierarchy in medical image fusion, this paper proposes an adaptive feature medical image fusion network based on a full-scale diffusion model. First, a region-level feature map is generated using a kernel-based saliency map to enhance local features and boundary details. Then, a full-scale diffusion feature extraction network is employed for global feature extraction, together with a multi-scale denoising U-shaped network designed to fully capture cross-layer information; a multi-scale feature integration module is introduced to reinforce the texture details and structural information extracted by the encoder. Finally, an adaptive fusion scheme progressively fuses the region-level features, global features, and source images layer by layer, improving the preservation of detail information. To validate its effectiveness, the proposed model was evaluated on the publicly available Harvard dataset and an abdominal dataset. Compared with nine representative image fusion methods, the proposed approach achieved improvements across seven evaluation metrics. The results demonstrate that the proposed method effectively extracts both global and local features of medical images, enhances texture details and target boundary clarity, and generates fused images with high contrast and rich information, providing more reliable support for subsequent clinical diagnosis.
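The abstract combines a kernel-based region-level saliency map with an adaptive, weight-based fusion rule. The full-scale diffusion network and multi-scale denoising U-shaped network are not reproduced here; the sketch below only illustrates the general idea of saliency-weighted adaptive fusion of two pre-registered modalities. All function names, kernel sizes, and weighting choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): kernel-based saliency maps used as
# per-pixel weights to adaptively fuse two pre-registered source images.
# Parameter values and function names are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def kernel_saliency(img, sigma=3.0, win=9):
    """Rough region-level saliency: deviation of each pixel from its
    Gaussian-smoothed neighbourhood, pooled over a local window."""
    img = img.astype(np.float64)
    contrast = np.abs(img - gaussian_filter(img, sigma))  # local contrast
    return uniform_filter(contrast, size=win)             # region-level pooling


def adaptive_fuse(img_a, img_b, eps=1e-8):
    """Fuse two modalities with weights proportional to their saliency maps."""
    sal_a = kernel_saliency(img_a)
    sal_b = kernel_saliency(img_b)
    w_a = sal_a / (sal_a + sal_b + eps)  # per-pixel weight in [0, 1]
    return w_a * img_a + (1.0 - w_a) * img_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((256, 256))  # stand-ins for registered CT / MR slices
    mr = rng.random((256, 256))
    fused = adaptive_fuse(ct, mr)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

In the paper, the region-level features produced this way are reportedly fused layer by layer with the globally extracted diffusion features and the source images; the sketch collapses that into a single pixel-wise weighting step for clarity.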