Reconstruction from truncated CT data based on dual-domain Transformer coupled feature learning
- DOI:10.12122/j.issn.1673-4254.2024.05.17
- VernacularTitle:基于双域Transformer耦合特征学习的CT截断数据重建模型
- Author:
Chen WANG 1,2; Mingqiang MENG; Mingqiang LI; Yongbo WANG; Dong ZENG; Zhaoying BIAN; Jianhua MA
Author Information
1. School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, Guangdong, China
2. Pazhou Lab (Huangpu), Guangzhou 510005, Guangdong, China
- Keywords:
CT truncation artifacts;
transformer;
deep learning;
dual-domain
- From:
Journal of Southern Medical University
2024;44(5):950-959
- Country:China
- Language:Chinese
- Abstract:
Objective: To propose a CT truncated-data reconstruction model (DDTrans) based on projection- and image-domain Transformer coupled feature learning, aimed at reducing the truncation artifacts and structural distortion caused by an insufficient field of view (FOV) in CT scanning.
Methods: Transformer-based restoration models were built in both the projection domain and the image domain, using the long-range dependency modeling capability of the Transformer attention modules to capture global structural features, restore the missing projection data, and enhance the reconstructed images. A differentiable Radon back-projection operator layer was constructed between the projection-domain and image-domain networks to enable end-to-end training of DDTrans. A projection consistency loss was introduced to constrain the forward projection of the reconstructed image and further improve reconstruction accuracy.
Results: Experiments on Mayo simulation data showed that, for both partially truncated and interior-scan data, DDTrans outperformed the comparison algorithms in removing truncation artifacts at the FOV edges and in recovering information outside the FOV.
Conclusion: DDTrans can effectively remove CT truncation artifacts, ensuring accurate reconstruction of the data within the FOV and an approximate reconstruction of the data outside the FOV.
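A minimal PyTorch sketch of the dual-domain coupling described in the abstract, written under simplifying assumptions: the Radon forward- and back-projection operators are replaced by a fixed random system matrix A (a real implementation would use the actual CT geometry), the projection- and image-domain networks are plain nn.TransformerEncoder stacks standing in for the paper's restoration models, and the names DDTransSketch, proj_net, img_net as well as the 0.1 loss weight are illustrative choices, not the authors' identifiers or settings.

import torch
import torch.nn as nn

n_views, n_dets, img_size = 60, 96, 64                         # toy scan geometry
A = torch.randn(n_views * n_dets, img_size ** 2) / img_size    # stand-in system matrix

def forward_project(x):                          # image (B,1,H,W) -> sinogram (B,1,V,D)
    b = x.size(0)
    s = x.view(b, -1) @ A.t()
    return s.view(b, 1, n_views, n_dets)

class BackProjection(nn.Module):                 # differentiable back-projection layer (A^T here)
    def forward(self, s):
        b = s.size(0)
        x = s.view(b, -1) @ A
        return x.view(b, 1, img_size, img_size)

class TokenTransformer(nn.Module):               # rows of the input act as attention tokens
    def __init__(self, width):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
    def forward(self, x):                        # x: (B, 1, H, W)
        b, _, h, w = x.shape
        t = self.encoder(x.view(b, h, w))        # global (long-range) attention across rows
        return t.view(b, 1, h, w)

class DDTransSketch(nn.Module):                  # hypothetical name, not the paper's code
    def __init__(self):
        super().__init__()
        self.proj_net = TokenTransformer(n_dets)     # projection-domain restoration
        self.backproj = BackProjection()             # couples the two domains end to end
        self.img_net = TokenTransformer(img_size)    # image-domain enhancement

    def forward(self, truncated_sino):
        sino_restored = truncated_sino + self.proj_net(truncated_sino)
        img_coarse = self.backproj(sino_restored)
        img_final = img_coarse + self.img_net(img_coarse)
        return sino_restored, img_final

model = DDTransSketch()
sino = torch.randn(2, 1, n_views, n_dets)             # truncated sinogram (toy data)
target_img = torch.randn(2, 1, img_size, img_size)    # reference image (toy data)

sino_restored, img_final = model(sino)
image_loss = nn.functional.mse_loss(img_final, target_img)
# projection consistency: re-project the image estimate and match the restored sinogram
consistency_loss = nn.functional.mse_loss(forward_project(img_final), sino_restored)
loss = image_loss + 0.1 * consistency_loss
loss.backward()                                        # gradients flow through both domains

The point mirrored here is that the back-projection layer is differentiable, so both the image-domain loss and the projection consistency loss propagate gradients back into the projection-domain network, which is what makes the end-to-end training described in the Methods possible.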