Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (4): 243-251. DOI: 10.3778/j.issn.1002-8331.2110-0124

• Graphics and Image Processing •

Global Illumination Rendering Based on Generative Adversarial Model and Light Decomposition

LIANG Xiao, WANG Niting, WANG Jingwen, OUYANG Jiao   

  1. School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
  • Online: 2023-02-15   Published: 2023-02-15

Abstract: To address the problem of blurred high-frequency features in global illumination image reconstruction, a global illumination rendering network based on a generative adversarial model and light decomposition is proposed. Taking auxiliary graphics attributes (normals, depth, roughness, etc.) as its main input, the network learns an abstract representation of light transport and encodes it to predict the global illumination image. Firstly, light is decomposed into diffuse and specular components, and independent generative adversarial networks learn and infer the corresponding light sub-images end to end, which avoids mutual interference between mixed lighting components and ensures clear reproduction of high-frequency details. Secondly, the rendering network uses an auto-encoder as its backbone and adds multi-scale feature fusion blocks that extract features under different receptive fields, so that complex effects such as shadows and secondary reflections are expressed convincingly. Thirdly, two enhanced adversarial loss functions, a rotation loss and a feature loss, are used to improve the stability of network training. Experimental results show that, compared with existing denoising algorithms and image generation models, the proposed method reconstructs more realistic global illumination images, preserves more high-frequency details, and increases PSNR by 8%~20%.
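A minimal sketch of the dual-branch idea described above, assuming a PyTorch implementation: two independent auto-encoder generators predict the diffuse and specular light sub-images from the auxiliary G-buffer attributes, a multi-scale fusion block combines features extracted under different receptive fields in the bottleneck, and the final frame is the composition of the two sub-images. The layer widths, the 10-channel G-buffer layout, and the additive recombination are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only; hyperparameters and composition are assumptions.
import torch
import torch.nn as nn


class MultiScaleFusionBlock(nn.Module):
    """Fuses features extracted with different receptive fields (1x1, 3x3, 5x5)."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(self.fuse(multi)) + x  # residual fusion of all scales


class LightGenerator(nn.Module):
    """Auto-encoder generator mapping G-buffer attributes to one light sub-image."""

    def __init__(self, in_channels: int = 10, base: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(True),
        )
        self.bottleneck = MultiScaleFusionBlock(base * 2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(True),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1),
        )

    def forward(self, gbuffer):
        return self.decoder(self.bottleneck(self.encoder(gbuffer)))


if __name__ == "__main__":
    # Hypothetical G-buffer layout: normals(3) + depth(1) + albedo(3) + roughness(1) + misc(2).
    gbuffer = torch.randn(1, 10, 256, 256)
    diffuse_net, specular_net = LightGenerator(), LightGenerator()
    diffuse, specular = diffuse_net(gbuffer), specular_net(gbuffer)
    image = diffuse + specular  # recombine the decomposed light paths
    print(image.shape)  # torch.Size([1, 3, 256, 256])
```

In a full training setup, each generator would be paired with its own discriminator and trained with the adversarial, rotation, and feature losses described in the abstract.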

Key words: global illumination rendering, light decomposition, generative adversarial network (GAN), auto-encoder, multi-scale fusion
