Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (1): 218-223. DOI: 10.3778/j.issn.1002-8331.2008-0325

• Graphics and Image Processing •

Cross-Modality PET Synthesis Method Based on Residual and Adversarial Networks

XIAO Chenchen, CHEN Legeng, WANG Shuqiang   

  1. School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
    2. Research Center for Biomedical Information Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
  • Online: 2022-01-01    Published: 2022-01-06

Abstract: To address the problems that existing cross-modality image synthesis methods fail to capture the spatial and structural information of human tissue well and that the synthesized images suffer from blurred edges and a low signal-to-noise ratio, a cross-modality PET synthesis method combining residual modules and generative adversarial networks is proposed. The method introduces an improved residual inception module and an attention mechanism into the generator, strengthening its feature-learning ability while reducing the number of parameters. A multi-scale discriminator is adopted to improve discrimination performance, and a multi-scale structural similarity loss is added to the loss function to better preserve the contrast information of the images. The method is compared with several existing approaches on the ADNI dataset. Experimental results show that the MAE of the synthesized PET images decreases while the SSIM and PSNR increase, demonstrating that the improved model better retains the structural information of the images and improves the quality of the synthesized images both visually and in terms of objective metrics.

Key words: cross-modality image synthesis, generative adversarial network, residual inception module, multi-scale discriminator
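
The abstract describes three components: a generator built from improved residual inception blocks with an attention mechanism, a multi-scale discriminator, and a multi-scale structural similarity term in the loss. As a rough illustration of the first component only, the PyTorch sketch below shows one plausible residual inception block with channel attention; the branch widths, kernel sizes, and squeeze-and-excitation-style attention design are assumptions for illustration, not the paper's exact architecture.

# Minimal sketch of a residual inception block with channel attention,
# assuming a squeeze-and-excitation style attention and factorized 5x5 branch.
# These design choices are illustrative and not taken from the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention via global pooling and a small bottleneck (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by its learned importance.
        return x * self.fc(self.pool(x))

class ResidualInceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / (factorized) 5x5 branches whose fused output is
    added back to the input through a residual connection."""
    def __init__(self, channels):
        super().__init__()
        branch_ch = channels // 4
        self.branch1 = nn.Conv2d(channels, branch_ch, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
        )
        # Two stacked 3x3 convolutions approximate a 5x5 receptive field
        # with fewer parameters.
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
        )
        self.fuse = nn.Conv2d(3 * branch_ch, channels, kernel_size=1)
        self.attention = ChannelAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        features = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        features = self.attention(self.fuse(features))
        return self.act(x + features)  # residual connection preserves structure

# Example: one 2D feature map with 64 channels; spatial size is preserved.
block = ResidualInceptionBlock(64)
out = block(torch.randn(1, 64, 128, 128))  # -> shape (1, 64, 128, 128)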
