Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (18): 150-156. DOI: 10.3778/j.issn.1002-8331.1907-0119

Facial Expression Generative Adversarial Networks Based on Facial Action Coding System

HU Xiaorui, LIN Jingyi, LI Dong, ZHANG Yun   

  1. School of Automation, Guangdong University of Technology, Guangzhou 510000, China
  Online: 2020-09-15    Published: 2020-09-10

Abstract:

Using a vector containing facial expression information as an input condition to guide the generation of highly realistic face images is an important research topic, but the commonly used eight expression labels are relatively limited. To better reflect the rich micro-expression information across the face, a facial expression generative adversarial network based on the Facial Action Coding System (FACS) is proposed, in which each facial muscle group is treated as an Action Unit (AU). An attention mechanism is integrated into the encoder-decoder generation module, so the network concentrates on local regions and makes targeted changes there. The objective function is based on the reconstruction loss and classification loss of the discrimination module together with an attention smoothing loss. Experimental results on the widely used BP4D face dataset show that this method attends more effectively to the region corresponding to each action unit, that a single AU label can control which expression is generated, and that the continuous value of an AU label controls the intensity of the expression. Compared with other methods, the facial expression images generated by this method retain clearer details and are more realistic.
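
To make the AU conditioning and the composite objective described above concrete, the following is a minimal PyTorch sketch of one common way to build an attention-based encoder-decoder generator conditioned on continuous AU intensities, together with a loss that combines reconstruction, AU classification, and attention smoothing terms. The layer sizes, the mask-blending scheme, the loss weights, and the names AttentionGenerator, au_classifier, and NUM_AUS are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_AUS = 12  # assumed number of AU labels; the paper's exact AU set is not restated here

class AttentionGenerator(nn.Module):
    """Encoder-decoder generator conditioned on a continuous AU vector.

    It predicts a colour map C and an attention mask A, and blends them as
    I_out = A * I_in + (1 - A) * C, so only the local regions selected by
    the mask are modified.
    """
    def __init__(self, au_dim=NUM_AUS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + au_dim, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.to_color = nn.Conv2d(64, 3, 3, padding=1)  # colour map C
        self.to_attn = nn.Conv2d(64, 1, 3, padding=1)   # attention mask A

    def forward(self, img, target_aus):
        # Broadcast the AU vector to a spatial map and concatenate it with the image.
        b, _, h, w = img.shape
        au_map = target_aus.view(b, -1, 1, 1).expand(b, target_aus.size(1), h, w)
        feat = self.decoder(self.encoder(torch.cat([img, au_map], dim=1)))
        color = torch.tanh(self.to_color(feat))
        attn = torch.sigmoid(self.to_attn(feat))
        return attn * img + (1.0 - attn) * color, attn

def attention_smoothing_loss(attn):
    # Total-variation style penalty that keeps the attention mask smooth.
    dh = (attn[:, :, 1:, :] - attn[:, :, :-1, :]).abs().mean()
    dw = (attn[:, :, :, 1:] - attn[:, :, :, :-1]).abs().mean()
    return dh + dw

def generator_objective(gen, au_classifier, img, source_aus, target_aus,
                        lambda_rec=10.0, lambda_cls=1.0, lambda_attn=0.1):
    # Composite objective: reconstruction + AU classification + attention smoothing.
    # The adversarial term is omitted for brevity; au_classifier stands in for the
    # discrimination module's AU prediction head.
    fake, attn_fwd = gen(img, target_aus)
    cyc, attn_bwd = gen(fake, source_aus)                   # map back to the source AUs
    rec_loss = F.l1_loss(cyc, img)                          # reconstruction (cycle) loss
    cls_loss = F.mse_loss(au_classifier(fake), target_aus)  # AU classification/regression loss
    attn_loss = attention_smoothing_loss(attn_fwd) + attention_smoothing_loss(attn_bwd)
    return lambda_rec * rec_loss + lambda_cls * cls_loss + lambda_attn * attn_loss

Under this sketch, setting a single entry of target_aus to a continuous value between 0 and 1 changes only the facial region covered by the corresponding attention response, which mirrors the abstract's claim that individual AU labels select which expression component is generated while their magnitudes control its intensity.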

Key words: facial expression generation, generative adversarial networks, facial action coding system
