Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (23): 191-201. DOI: 10.3778/j.issn.1002-8331.2206-0356

• Graphics and Image Processing •

Research on Multi-Feature Fusion Image Restoration Based on Edge Conditions

OU Jing, WEN Zhicheng, DENG Wengui, ZHANG Shuting   

  1. School of Computing, Hunan University of Technology, Zhuzhou, Hunan 412000, China
  • Online: 2023-12-01  Published: 2023-12-01

Abstract: To address the lack of reasonable inference of the deep structure inside missing image regions, and the difficulty of generating accurate and sharp texture, in current image inpainting, a multi-feature fusion image inpainting method based on edge conditions, named MEGAN (multi-feature fusion network model based on edge condition), is proposed. The model adopts a two-stage generation scheme. First, an edge generative adversarial network restores the edge information of the damaged image. Second, the completed edge information guides the texture-detail network in generating the full image. Gated convolutions are added to the generator structure to reduce the interference of invalid pixels during restoration, and a gated multi-dilation convolution block (GM block) is introduced to perform multi-scale feature extraction on the image to be repaired. A multi-scale spectrally normalized Markovian discriminator promotes structural coherence and detail expressiveness in the generated images while strictly bounding the magnitude of gradient variation, thereby improving model accuracy and stabilizing training. Test results on the CelebA and Places2 datasets show that MEGAN significantly outperforms mainstream image inpainting algorithms in generating plausible image structure and accurate, sharp texture detail.
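The gated convolution and the multi-scale GM block described above can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's actual architecture: the channel counts, dilation rates, and activation choices are assumptions, and the class names `GatedConv2d` and `GMBlock` are hypothetical.

```python
# Hedged sketch of a gated convolution and a gated multi-dilation block,
# assuming illustrative channel counts and dilation rates.
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Gated convolution: a parallel conv produces a soft gate (mask)
    that suppresses features coming from invalid (hole) pixels."""

    def __init__(self, in_ch, out_ch, k=3, dilation=1):
        super().__init__()
        pad = dilation * (k // 2)  # keeps spatial size unchanged
        self.feature = nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=dilation)
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=dilation)

    def forward(self, x):
        # Element-wise gating: sigmoid gate modulates the feature response.
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))


class GMBlock(nn.Module):
    """Gated multi-dilation block: parallel gated convs with different
    dilation rates capture multi-scale context, fused by a 1x1 conv
    (the rates here are assumptions)."""

    def __init__(self, ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            GatedConv2d(ch, ch, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x):
        # Concatenate multi-scale branches along channels, then fuse.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


# Shape check: the block preserves the input tensor's shape.
x = torch.randn(1, 32, 64, 64)
y = GMBlock(32)(x)
```

Because each branch pads by `dilation * (k // 2)`, every dilation rate preserves the spatial resolution, so the branch outputs can be concatenated and fused without resampling.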

Key words: deep learning, image restoration, generative adversarial network, gated convolution, multi-feature fusion