计算机工程与应用 (Computer Engineering and Applications) ›› 2021, Vol. 57 ›› Issue (18): 135-141. DOI: 10.3778/j.issn.1002-8331.2005-0411

• Network, Communication and Security •

Image-Based CAPTCHA Protection Method Based on Universal Adversarial Perturbations

SHU Le, DAI Jiazhu

  1. School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
  • Online: 2021-09-15 Published: 2021-09-13

Image-Based CAPTCHA Protection Method Based on Universal Adversarial Perturbations

SHU Le, DAI Jiazhu   

  1. School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
  • Online:2021-09-15 Published:2021-09-13

Abstract (Chinese):

The development of convolutional neural networks has made image-based CAPTCHAs insecure. Exploiting the universal adversarial perturbations that exist for convolutional neural networks, a protection method for image-based CAPTCHAs is proposed. A fast algorithm for generating universal adversarial perturbations is presented, which accelerates generation by aggregating adversarial perturbation vectors with similar directions. Based on this algorithm, a protection scheme for image-based CAPTCHAs is designed: universal adversarial perturbations are added to the CAPTCHA images so that they cannot be recognized by convolutional neural network models. Simulation results on the ImageNet dataset show that the scheme achieves a lower cracking rate than the existing work DeepCAPTCHA and can effectively protect image-based CAPTCHAs from being cracked by mainstream convolutional neural network models.

Key words (Chinese): deep learning, adversarial example, universal adversarial perturbation, image-based CAPTCHA, convolutional neural network, image classification

Abstract:

The development of convolutional neural networks has made image-based CAPTCHAs no longer safe. Based on the universal adversarial perturbations that exist for convolutional neural networks, a protection method for image-based CAPTCHAs is proposed. First, an algorithm for quickly generating universal adversarial perturbations is presented, which speeds up generation by aggregating adversarial perturbation vectors with similar directions. Then, a protection scheme for image-based CAPTCHAs is designed on top of this algorithm: universal adversarial perturbations are added to the CAPTCHA images so that they cannot be recognized by convolutional neural network models. Experimental results on the ImageNet dataset show that the scheme achieves a lower cracking rate than the existing work DeepCAPTCHA and can effectively protect image-based CAPTCHAs from being cracked by mainstream convolutional neural network models.
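The abstract does not give the algorithm's details, so the following is only a minimal sketch of the general idea described above, assuming a PyTorch/torchvision environment: per-image adversarial directions (here an FGSM-style sign-of-gradient step, an assumption; the paper's perturbation vectors may be computed differently) are aggregated only when their direction is similar to the accumulated universal perturbation, and the resulting perturbation is added to CAPTCHA images before they are served. All names (universal_perturbation, sim_threshold, eps) are hypothetical and do not come from the paper.

# Sketch only: not the paper's exact algorithm, just the aggregation idea
# described in the abstract (combine adversarial directions that point
# roughly the same way into one universal perturbation).
import torch
import torch.nn.functional as F
import torchvision

def universal_perturbation(model, images, labels, eps=8 / 255, sim_threshold=0.2):
    """Accumulate per-image adversarial directions whose cosine similarity
    with the running universal perturbation exceeds a threshold."""
    model.eval()
    uap = torch.zeros_like(images[0])  # running universal perturbation
    for x, y in zip(images, labels):
        x = x.unsqueeze(0).clone().requires_grad_(True)
        loss = F.cross_entropy(model(x + uap), y.unsqueeze(0))
        loss.backward()
        # FGSM-style per-image adversarial direction (an assumption here)
        direction = x.grad.detach().squeeze(0).sign()
        # keep only directions roughly aligned with the accumulated perturbation
        cos = F.cosine_similarity(direction.flatten(), uap.flatten(), dim=0)
        if uap.abs().sum().item() == 0 or cos.item() > sim_threshold:
            # aggregate and project back into the L_inf ball of radius eps
            uap = (uap + eps * direction).clamp(-eps, eps)
    return uap

if __name__ == "__main__":
    # Hypothetical usage: protect a batch of CAPTCHA images against a CNN classifier.
    model = torchvision.models.resnet18(weights=None)  # stand-in for the attacker's CNN
    images = torch.rand(16, 3, 224, 224)               # placeholder CAPTCHA images
    labels = torch.randint(0, 1000, (16,))
    uap = universal_perturbation(model, images, labels)
    protected = (images + uap).clamp(0, 1)             # perturbed CAPTCHAs served to users

In such a scheme the perturbation would typically be generated offline against one or more surrogate CNNs and then reused across many CAPTCHA images, which is what makes a universal (rather than per-image) perturbation attractive for this protection setting.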

Key words: deep learning, adversarial example, universal adversarial perturbation, image-based CAPTCHA, convolutional neural network, image classification