CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement.
10.7507/1001-5515.202209050
- Authors:
Yunpeng LIU 1; Jin LI 2; Yu WANG 1; Wenli CAI 3; Fei CHEN 2; Wenjie LIU 1; Xianhao MAO 1; Kaifeng GAN 4; Renfang WANG 2; Dechao SUN 5; Hong QIU 2; Bangquan LIU 5
Author Information
1. Information and Computing Science Major, School of International Exchange, Ningbo University of Technology, Ningbo, Zhejiang 315000, P. R. China.
2. Zhejiang Wanli University, Ningbo, Zhejiang 315000, P. R. China.
3. Radiology Imaging Laboratory, Harvard Medical School, Boston, Massachusetts 02114, USA.
4. Li Huili Hospital Affiliated to Ningbo University, Ningbo, Zhejiang 315000, P. R. China.
5. School of Digital Technology and Engineering, Ningbo University of Finance & Economics, Ningbo, Zhejiang 315000, P. R. China.
- Publication Type: Journal Article
- Keywords:
Deep learning;
Generative adversarial network;
Image enhancement;
Image fusion;
Medical image
- MeSH:
Image Processing, Computer-Assisted/methods*;
Neural Networks, Computer;
Tomography, X-Ray Computed;
Magnetic Resonance Imaging/methods*;
Algorithms
- From:
Journal of Biomedical Engineering
2023;40(2):208-216
- Country: China
- Language: Chinese
- Abstract:
To address the problems of missing important features, inconspicuous details, and unclear textures in multimodal medical image fusion, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator operated on the high-frequency feature images, and dual discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, while the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current state-of-the-art fusion algorithms, the proposed method produced richer texture details and clearer contour edges in subjective evaluation. In the objective evaluation, Q^{AB/F}, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher, respectively, than the best results of the comparison methods. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.
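The abstract outlines a two-branch pipeline: decompose each source image into low- and high-frequency components, fuse the high-frequency components with a GAN generator, fuse the low-frequency components with a pre-trained CNN, and reconstruct via the inverse transform. The sketch below illustrates that structure only; it is not the paper's implementation. A Gaussian-blur band split stands in for the paper's unspecified transform, and the `Generator` and `CNNFuser` networks are hypothetical placeholders.

```python
# Minimal PyTorch sketch of the two-branch fusion pipeline described in the
# abstract. Assumptions (not from the paper): a Gaussian-blur split stands in
# for the multi-scale transform, and both networks are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_bands(img, ksize=9, sigma=2.0):
    """Split an image into low- and high-frequency bands via Gaussian blur."""
    ax = torch.arange(ksize) - ksize // 2
    g = torch.exp(-ax.float() ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).to(img)
    k2d = torch.outer(g, g)[None, None]           # separable Gaussian kernel
    low = F.conv2d(img, k2d, padding=ksize // 2)  # low-frequency approximation
    return low, img - low                         # high frequency = residual

class Generator(nn.Module):
    """Placeholder GAN generator: fuses two high-frequency maps into one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, hf_ct, hf_mri):
        return self.net(torch.cat([hf_ct, hf_mri], dim=1))

class CNNFuser(nn.Module):
    """Placeholder for the pre-trained (transfer-learning) low-frequency fuser."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 1, 1)  # stands in for a pre-trained backbone
    def forward(self, lf_ct, lf_mri):
        return self.net(torch.cat([lf_ct, lf_mri], dim=1))

def fuse(ct, mri, gan_generator, cnn_fuser):
    lf_ct, hf_ct = split_bands(ct)
    lf_mri, hf_mri = split_bands(mri)
    hf_fused = gan_generator(hf_ct, hf_mri)  # GAN fuses high frequencies
    lf_fused = cnn_fuser(lf_ct, lf_mri)      # pre-trained CNN fuses low frequencies
    return lf_fused + hf_fused               # inverse of the additive band split

if __name__ == "__main__":
    ct, mri = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    fused = fuse(ct, mri, Generator(), CNNFuser())
    print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

In this reading of the abstract, the dual discriminators (omitted above) would be applied during training to the reconstructed fused image, encouraging it to preserve detail from both modalities.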
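For reference, simplified NumPy versions of three of the six objective metrics named in the abstract (IE, SF, and MI) are sketched below. These follow the standard textbook definitions; the paper's exact implementations, and those of Q^{AB/F}, SSIM, and VIFF, may differ.

```python
# Standard definitions of information entropy (IE), spatial frequency (SF),
# and mutual information (MI) for 8-bit greyscale images; an assumption-based
# sketch, not the paper's evaluation code.
import numpy as np

def information_entropy(img, bins=256):
    """IE: Shannon entropy of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF: combined row/column gradient energy."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def mutual_information(a, b, bins=256):
    """MI between a source image and the fused image."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

if __name__ == "__main__":
    src = np.random.randint(0, 256, (64, 64)).astype(float)
    fused = np.random.randint(0, 256, (64, 64)).astype(float)
    print(information_entropy(fused), spatial_frequency(fused),
          mutual_information(src, fused))
```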