Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (23): 24-41. DOI: 10.3778/j.issn.1002-8331.2205-0520

• Hot Topics and Surveys •


Survey of Research on Adversarial Example Attacks and Defenses in Image Classification Models

YAN Jiale, XU Yang, ZHANG Sicong, LI Kezi   

  1. Key Laboratory of Information and Computing Science of Guizhou Province, Guizhou Normal University, Guiyang 550001, China
  • Online: 2022-12-01  Published: 2022-12-01


Abstract: Deep learning models have surpassed human performance in image classification, but research has shown that they are highly vulnerable to adversarial examples, which poses a serious challenge to their deployment in security-sensitive systems. This survey reviews and summarizes the research on adversarial examples in image classification in order to establish a basic knowledge system for further study of the field. Firstly, the formal definition of adversarial examples and related terminology are introduced. Then, adversarial attack and defense methods are surveyed, with particular attention to the emerging defenses based on certified robustness, and possible explanations for the existence of adversarial examples are discussed. To highlight the feasibility of adversarial attacks in the real world, related work on physical-world attacks is reviewed. Finally, building on this review of the literature, the overall development trends, open challenges, and future research directions for adversarial examples are analyzed.
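
As context for the formal definition that the survey introduces, a commonly used L_p-norm formalization is sketched below; the symbols f, x, y, x', ε and the choice of norm are standard notation assumed here, not necessarily the paper's own:

```latex
% A common formalization of an adversarial example (assumed
% notation): given a classifier f and an input x correctly
% classified with label y, an adversarial example x' is an
% input within an eps-ball around x that changes the prediction.
\[
  f(x) = y, \qquad \|x' - x\|_p \le \epsilon, \qquad f(x') \ne y .
\]
```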

Key words: image classification, adversarial examples, deep learning, adversarial attack, adversarial defense
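
To make the gradient-based attacks covered by the survey concrete, below is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one of the classic attacks in this literature; the function name fgsm_attack and the perturbation budget eps=8/255 are illustrative choices, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast gradient sign method (FGSM) sketch.

    Takes one signed-gradient ascent step on the cross-entropy
    loss, yielding x_adv with ||x_adv - x||_inf <= eps.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # model(x) returns logits
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # single L_inf step
    return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```

Iterating this step with a smaller step size and re-projecting onto the eps-ball yields the stronger PGD attack, a standard baseline for evaluating the defenses discussed in the survey.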