Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (14): 162-174. DOI: 10.3778/j.issn.1002-8331.2304-0007

• Graphics and Image Processing •


Age Transformation Method Combined with Pixel2style2Pixel

GUI Lielin, HUANG Shan, YIN Yue   

  1. College of Electrical Engineering, Sichuan University, Chengdu 610065, China
  • Online: 2024-07-15  Published: 2024-07-15



Abstract: Age transformation plays an important role in criminal investigation, face recognition, and other fields. Common age transformation methods require training on paired datasets with age annotations, and suffer from problems such as low quality of the generated images and insufficient disentanglement of age semantic information. To address these problems, an age-recognition loss and a contextual loss are introduced into the Pixel2style2Pixel training framework, and the overall loss function is adapted to age transformation so that age information is extracted while image quality is preserved. The encoding network is further improved to work with these loss functions to edit images in the latent space, yielding an age transformation method based on Pixel2style2Pixel. The proposed method is validated on the FFHQ and CelebA datasets. Experimental results show that, without a paired age-annotated training set, the improved loss function generates transformed images that better match the target age, achieving a face similarity distance of 0.346, FID of 45.69, SSIM of 0.593 6, and PSNR of 19.64 dB, all better than the comparison methods. This demonstrates that the proposed method can generate high-quality transformation results with highly disentangled age semantics.
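The overall objective described above (a weighted combination of reconstruction-quality terms and an age term) and the reported PSNR metric can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weight names (`lambda_l2`, `lambda_age`) and the stand-in loss terms (a plain pixel-wise L2 and an absolute-difference "age" penalty replacing the actual age-recognition and contextual networks) are assumptions for illustration only.

```python
import numpy as np

def l2_loss(x, y):
    # Pixel-wise mean squared error between generated and target images.
    return float(np.mean((x - y) ** 2))

def age_loss(pred_age, target_age):
    # Placeholder for the age-recognition term: penalizes deviation of the
    # estimated age of the generated face from the desired target age.
    return abs(pred_age - target_age)

def total_loss(x, y, pred_age, target_age, lambda_l2=1.0, lambda_age=0.1):
    # Overall objective as a weighted sum of the individual loss terms,
    # mirroring how pSp-style frameworks combine multiple losses.
    return lambda_l2 * l2_loss(x, y) + lambda_age * age_loss(pred_age, target_age)

def psnr(x, y, max_val=255.0):
    # Peak signal-to-noise ratio in dB, as reported in the experiments.
    mse = np.mean((x - y) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

In practice the L2 placeholder would be replaced by the framework's perceptual, identity, and contextual losses, and `pred_age` would come from a pretrained age estimator; the weighted-sum structure itself is the part the abstract describes.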

Key words: Pixel2style2Pixel, face aging, StyleGAN, loss function, image processing