Computer Engineering and Applications ›› 2021, Vol. 57 ›› Issue (23): 61-70.DOI: 10.3778/j.issn.1002-8331.2108-0147

• Research Hotspots and Reviews •

Summary of Adversarial Examples Techniques Based on Deep Neural Networks

BAI Zhixu, WANG Hengjun, GUO Kexiang   

  1. Strategic Support Force Information Engineering University, Zhengzhou 450001, China
  • Online:2021-12-01 Published:2021-12-02





Deep learning has shown remarkable capabilities on several extremely difficult tasks, yet deep neural networks can hardly avoid misclassifying examples with deliberately added perturbations, known as "adversarial examples". Adversarial examples have become a popular research topic in the field of deep learning security, and studying their causes and mechanisms helps optimize models for security and robustness. Based on the principle of adversarial examples, this paper classifies and summarizes the classical adversarial attack methods, dividing them into two major categories, white-box attacks and black-box attacks, and further into subcategories such as non-targeted attacks, targeted attacks, full-pixel additive perturbation attacks, and partial-pixel additive perturbation attacks. Several typical attack methods are reproduced on the ImageNet dataset, and the experimental results are used to compare the advantages and disadvantages of these generation methods, analyze outstanding problems in adversarial example generation, and offer an outlook on the application and development of adversarial examples.
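To illustrate the white-box, full-pixel additive perturbation family the abstract describes, the sketch below applies an FGSM-style sign-of-gradient perturbation to a toy linear classifier. This is only a self-contained illustration of the attack principle, not the paper's actual experiments; the linear model, weights, and loss are assumptions chosen so the example runs without a deep learning framework.

```python
# FGSM-style full-pixel additive perturbation on a toy linear model
# (a hedged sketch -- the model and numbers are illustrative, not from
# the surveyed paper, which experiments on ImageNet with deep networks).

def sign(v):
    """Sign function used by FGSM to keep the perturbation L-infinity bounded."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(x, w):
    """Linear scorer: label +1 if w . x >= 0, else -1."""
    score = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if score >= 0 else -1

def fgsm(x, w, y_true, epsilon):
    """Perturb every pixel of x by epsilon in the loss-increasing direction.

    For a linear score w . x with true label y_true in {-1, +1}, the loss
    gradient w.r.t. x is proportional to -y_true * w, so the adversarial
    example is x + epsilon * sign(-y_true * w). White-box: requires w.
    """
    return [xi + epsilon * sign(-y_true * wi) for xi, wi in zip(x, w)]

# A point the clean model classifies correctly as +1...
w = [0.5, -0.25, 0.3]
x = [0.2, 0.1, 0.05]
assert predict(x, w) == 1

# ...is misclassified after an imperceptible-style bounded perturbation.
x_adv = fgsm(x, w, y_true=1, epsilon=0.3)
assert predict(x_adv, w) == -1
```

A black-box attacker, by contrast, cannot read `w` and must estimate the perturbation direction from queries to `predict` alone, which is what distinguishes the two attack categories surveyed above.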

Key words: deep neural network, adversarial example, white-box attack, black-box attack, robustness


