Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (8): 117-126. DOI: 10.3778/j.issn.1002-8331.2208-0182

• Pattern Recognition and Artificial Intelligence •

Face Recognition Method Based on Improved Visual Transformer

JI Ruirui, XIE Yuhui, LUO Fengkai, MEI Yuan   

  1. School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China
  • Online: 2023-04-15    Published: 2023-04-15

Abstract: Most current face recognition methods rely on convolutional neural networks, which construct cascaded multi-layer processing units and fuse local features through convolution operations; as a result, they ignore the global semantic information of the face image and pay insufficient attention to its key regions. This paper proposes a face recognition method based on an improved visual Transformer. Shuffle Transformer is introduced as the backbone network for feature extraction: the global information of the feature map is captured through the self-attention mechanism and the Shuffle operation, and long-distance dependencies are established between feature points to enhance the feature perception ability of the model. At the same time, drawing on the characteristics of the ArcFace loss function and the center loss function, a fusion loss is designed as the objective function, which uses intra-class constraints to enlarge the angular margin and improve the discriminability of the feature space. The proposed method achieves average accuracies of 99.83%, 95.87%, 90.05%, 98.05% and 97.23% on five challenging benchmark face datasets, LFW, CALFW, CPLFW, AgeDB-30 and CFP, which shows that the improved model effectively strengthens face feature extraction and achieves better recognition performance than convolutional neural networks of the same scale.
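
To make the fusion loss concrete, the following PyTorch sketch combines an ArcFace-style angular-margin softmax term with a center-loss term, as described in the abstract; the class name FusionLoss and the hyper-parameters s, m and lambda_c are illustrative assumptions, not the paper's actual implementation or settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionLoss(nn.Module):
    """Illustrative fusion of an ArcFace-style margin loss and center loss (not the authors' code)."""
    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5, lambda_c=0.01):
        super().__init__()
        self.s, self.m, self.lambda_c = s, m, lambda_c  # assumed example values
        # Class weight vectors for the angular-margin softmax head.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Learnable per-class centers for the center-loss term.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin m only to each sample's target-class angle.
        one_hot = F.one_hot(labels, num_classes=self.weight.size(0)).float()
        logits = self.s * torch.cos(theta + self.m * one_hot)
        arcface_loss = F.cross_entropy(logits, labels)
        # Center loss: squared distance between each embedding and its class center.
        center_loss = 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()
        return arcface_loss + self.lambda_c * center_loss

# Example usage with embeddings from a feature-extraction backbone:
# loss_fn = FusionLoss(feat_dim=512, num_classes=1000)
# embeddings = torch.randn(8, 512)
# labels = torch.randint(0, 1000, (8,))
# loss = loss_fn(embeddings, labels)

In this sketch the margin m widens the angular separation required of the target class, while the center term pulls each embedding toward its learnable class center, which is how an intra-class constraint can make the feature space more discriminative.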

Key words: face recognition, visual Transformer, self-attention mechanism, ArcFace loss function
