Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (24): 98-109.DOI: 10.3778/j.issn.1002-8331.2208-0086

• Pattern Recognition and Artificial Intelligence •

Palm Vein Recognition Network Combining Transformer and CNN

WU Kai, SHEN Wenzhong, JIA Dingding, LIANG Juan   

  1. School of Electronic Information Engineering, Shanghai University of Electric Power, Shanghai 201200, China
  • Online: 2023-12-15  Published: 2023-12-15

Abstract: To address the limited accuracy of palm vein feature extraction and recognition, this paper proposes a palm vein recognition network, PVCodeNet. The network combines an improved BasicBlock with a Transformer Encoder and employs the additive angular margin loss (AAM-Loss), which enlarges the decision boundary between classes. It is the first work to successfully apply a Transformer Encoder to global feature extraction from palm vein images. The improved BasicBlock replaces conventional convolution with depthwise over-parameterized convolution (Do-Conv) to extract more discriminative features, and it incorporates the normalization-based attention module (NAM), which applies a sparsity penalty to suppress the weights of insignificant features and thereby extracts important detail features in both the channel and spatial domains. The paper describes palm key-point localization, ROI extraction, and image enhancement in detail, and reports thorough experiments on feature vector dimensionality and AAM-Loss parameter settings. In ablation experiments on the PolyU database and the self-built SEPAD-PV database, the equal error rate (EER) reaches 0, a breakthrough in recognition rate. To verify the generalization ability of the network, it is further evaluated on the Tongji palmprint database and the SDUMLA finger vein database, both of which have similar texture characteristics; its EER is far better than that of other mainstream algorithms, fully demonstrating the superiority of the proposed method.
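The additive angular margin loss named in the abstract is not spelled out there; the following is a minimal numpy sketch of the standard AAM (ArcFace-style) formulation it refers to, in which a margin m is added to the angle between each feature and its true-class weight vector before softmax cross-entropy. All names and shapes here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def aam_loss(embeddings, weights, labels, s=30.0, m=0.5):
    """Additive angular margin (AAM-Loss) sketch.

    embeddings: (N, d) feature vectors; weights: (C, d) class centers;
    labels: (N,) true class indices. s scales the cosine logits and m is
    the angular margin added to the target class, which pushes the
    decision boundary away from the class center.
    """
    # L2-normalize features and class weights so logits are cosines.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)          # (N, C) cosine similarities
    theta = np.arccos(cos)
    # Add the margin m only to the angle of each sample's true class.
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    logits = np.where(target, np.cos(theta + m), cos) * s
    # Standard softmax cross-entropy on the margin-adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

With m = 0 this reduces to ordinary normalized-softmax loss; a positive m raises the loss for the same features, forcing tighter intra-class clustering during training.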

Key words: palm vein recognition, Transformer Encoder, depthwise over-parameterized convolution (Do-Conv), normalization-based attention module (NAM), additive angular margin loss (AAM-Loss)
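The key property of the depthwise over-parameterized convolution (Do-Conv) listed above is that its extra depthwise kernel can be folded into the ordinary kernel after training, so the over-parameterization costs nothing at inference. A simplified 1-D numpy sketch of that folding step (shapes and names are assumptions for illustration):

```python
import numpy as np

def collapse_doconv(W, D):
    """Fold a Do-Conv layer's two kernels into one ordinary kernel.

    During training each input channel carries an extra depthwise
    mixing matrix D composed with the ordinary kernel W; at inference
    the composition is a single kernel of the original shape.

    W: (c_out, c_in, k)  ordinary 1-D conv kernel (simplified case)
    D: (c_in, k, k)      per-input-channel depthwise mixing matrix
    returns W' with the same shape as W.
    """
    # For each input channel i, mix the k taps of W through D[i]:
    # W'[o, i, j] = sum_k W[o, i, k] * D[i, k, j]
    return np.einsum('oik,ikj->oij', W, D)
```

Initializing each D[i] to the identity matrix makes the folded kernel equal W exactly, i.e. plain convolution is the starting point and training only adds capacity.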

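The normalization-based attention module (NAM) described in the abstract weights channels by the batch-normalization scale factors, so low-variance (insignificant) channels are suppressed. A hedged numpy sketch of the channel branch of that idea (not the authors' exact implementation; all shapes are assumptions):

```python
import numpy as np

def nam_channel_attention(x, gamma, beta):
    """Channel attention in the style of NAM (sketch).

    NAM reuses the BN scale factors gamma as a measure of channel
    importance: channels with small |gamma| carry little variance, so
    their normalized share of the total |gamma| acts as a sparsity
    penalty that damps insignificant features.

    x: (N, C, H, W) feature map; gamma, beta: (C,) BN parameters.
    """
    # Batch-normalize the feature map with the given gamma and beta.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    bn = gamma.reshape(1, -1, 1, 1) * (x - mean) / np.sqrt(var + 1e-5) \
        + beta.reshape(1, -1, 1, 1)
    # Per-channel importance: each channel's share of the total |gamma|.
    w = np.abs(gamma) / np.abs(gamma).sum()
    # Sigmoid gate in (0, 1) rescales the input feature map.
    gate = 1.0 / (1.0 + np.exp(-w.reshape(1, -1, 1, 1) * bn))
    return x * gate
```

The gate lies strictly between 0 and 1, so the module can only attenuate, never amplify, a channel; the spatial branch of NAM applies the same weighting over pixel positions instead of channels.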