Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (5): 34-42. DOI: 10.3778/j.issn.1002-8331.1909-0228

• Hot Topics and Reviews •

State of the Art on Adversarial Example Generation Methods for Attacking Classifier

YE Qisong, DAI Xuchu   

  1. School of Cyberspace Security, University of Science and Technology of China, Hefei 230026, China
  • Online: 2020-03-01    Published: 2020-03-06

Abstract:

Adversarial example generation has recently become a hot topic at the intersection of deep learning and security. It concerns the mechanisms, methods, and implementations of generating adversarial examples, with the goal of better understanding and addressing the vulnerability and security issues of deep learning systems. This paper focuses on adversarial example generation methods for deep neural network classifiers. It first introduces the concept of adversarial examples and then, according to the attack conditions and attack targets, divides attacks on classifiers into four categories: targeted attacks under the white-box condition, non-targeted attacks under the white-box condition, targeted attacks under the black-box condition, and non-targeted attacks under the black-box condition. On this basis, typical generation methods for each category are analyzed in depth, covering their basic ideas, methods, and implementation algorithms, and they are compared from the perspectives of applicable scenarios, advantages, and disadvantages. The analysis of the state of the art shows both the diversity and the regularity of adversarial example generation techniques, as well as the commonalities and differences among the different methods, providing a useful reference for further research on adversarial example generation and for improving the security of deep learning systems.

Key words: deep learning, security, adversarial example, attack, classifier, vulnerability
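
As a concrete illustration of the kind of method surveyed above, the sketch below shows a minimal white-box, non-targeted attack in the style of the Fast Gradient Sign Method (FGSM). It is an illustrative example only, assuming PyTorch, a pretrained classifier model, an input batch x with pixel values in [0, 1], and true labels y; it is not taken from the paper itself.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # White-box, non-targeted attack: move the input one step of size
        # epsilon along the sign of the loss gradient, so that the
        # classifier's loss on the true label y increases.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        model.zero_grad()
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input inside the valid pixel range [0, 1].
        return torch.clamp(x_adv, 0.0, 1.0).detach()

A targeted variant would instead step so as to decrease the loss with respect to a chosen target label, while black-box methods must estimate or transfer this gradient information because the model's internals are not accessible to the attacker.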