Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (10): 200-207. DOI: 10.3778/j.issn.1002-8331.2111-0044

• Pattern Recognition and Artificial Intelligence •


Facial Expression Recognition in Wild Based on Attention and Vision Transformer

LUO Yan, FENG Tianbo, SHAO Jie   

  1. School of Electronic and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China
    2. Information and Telecommunication Branch, State Grid Shanghai Municipal Electric Power Company, Shanghai 200000, China
  • Online: 2022-05-15  Published: 2022-05-15


Abstract: Facial expression recognition (FER) now focuses on images in the wild, which contain factors such as facial occlusion and image blur, rather than laboratory images; moreover, the COVID-19 epidemic forces people to wear masks in public places, which brings new challenges to FER. Inspired by the recent success of the Transformer on numerous computer vision tasks, an Attention-Transformer network is proposed, which is the first to use CSWin Transformer as the backbone. A channel-spatial attention module is added to strengthen the network's attention to global features, and the Sub-center ArcFace loss function is used to further improve the classification ability of the model. The proposed method is evaluated on two public in-the-wild facial expression datasets, RAF-DB and FERPlus, and on their corresponding masked versions. The accuracy rates are 88.80% and 89.31% on RAF-DB and FERPlus, and 76.12% and 72.28% on the masked datasets, demonstrating that the model outperforms state-of-the-art methods.
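The abstract names Sub-center ArcFace as the classification loss. The paper's own implementation is not reproduced here; as a rough illustration only, the sketch below shows the standard Sub-center ArcFace logit computation in NumPy (each class holds K sub-centers, the class score is the cosine to the nearest sub-center, and an additive angular margin m with scale s is applied to the ground-truth class). The function name and the default values s=64, m=0.5 are illustrative, taken from the original ArcFace formulation, not from this paper.

```python
import numpy as np

def subcenter_arcface_logits(emb, W, y, s=64.0, m=0.5):
    """Sub-center ArcFace logits (illustrative sketch).

    emb: (N, d) embeddings; W: (C, K, d) sub-center weights;
    y: (N,) integer class labels.
    """
    # L2-normalize embeddings and sub-centers so dot products are cosines
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w = W / np.linalg.norm(W, axis=2, keepdims=True)
    # cosine similarity of every sample to every sub-center: (N, C, K)
    cos = np.einsum('nd,ckd->nck', e, w)
    # pool over sub-centers: each class is scored by its nearest sub-center
    cos = cos.max(axis=2)                         # (N, C)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # additive angular margin on the ground-truth class only
    theta[np.arange(len(y)), y] += m
    return s * np.cos(theta)
```

The resulting logits are fed to a standard softmax cross-entropy; the margin shrinks the ground-truth logit, forcing tighter intra-class angular clustering, while multiple sub-centers absorb noisy or occluded samples.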

Key words: facial expression recognition, Transformer, attention mechanism, Sub-center ArcFace
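The channel-spatial attention module mentioned in the abstract is not specified in detail here; the following NumPy sketch illustrates the widely used CBAM-style design it resembles (channel attention from a shared MLP over global average- and max-pooled descriptors, followed by a spatial map from channel-wise pooling). All weights and the parameter-free spatial map are assumptions for brevity; the actual module in the paper may differ (e.g. it would normally use a 7×7 convolution for the spatial branch).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Channel attention on a feature map x of shape (C, H, W).

    w1: (C//r, C) and w2: (C, C//r) form a shared bottleneck MLP
    applied to both the average- and max-pooled channel descriptors.
    """
    avg = x.mean(axis=(1, 2))                     # (C,) global average pool
    mx = x.max(axis=(1, 2))                       # (C,) global max pool
    a = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0) +
                w2 @ np.maximum(w1 @ mx, 0.0))    # per-channel weights in (0, 1)
    return x * a[:, None, None]

def spatial_attention(x):
    """Spatial attention on x of shape (C, H, W).

    Channel-wise average and max maps are fused; a simple mean stands in
    for the usual 7x7 convolution (a simplification, not the paper's design).
    """
    avg = x.mean(axis=0)                          # (H, W)
    mx = x.max(axis=0)                            # (H, W)
    a = sigmoid((avg + mx) / 2.0)                 # per-location weights in (0, 1)
    return x * a[None, :, :]
```

Applied in sequence, the two branches reweight what (channels) and where (locations) the network attends to, which is how such a module can push a Transformer backbone toward globally informative features.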