Computer Engineering and Applications ›› 2021, Vol. 57 ›› Issue (15): 163-170.DOI: 10.3778/j.issn.1002-8331.2003-0137


Semantic Adversarial Examples Generation Method for Color Model Disturbances

WANG Shuya, LIU Qiangchun, CHEN Yunfang, WANG Fujun   

  1. College of Tongda, Nanjing University of Posts and Telecommunications, Yangzhou, Jiangsu 225127, China
  2. College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
  3. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
  • Online: 2021-08-01  Published: 2021-07-26





Convolutional neural networks are deep neural networks with powerful feature-extraction capabilities and have been widely applied in many fields. However, recent research shows that convolutional neural networks are vulnerable to adversarial attacks. Unlike traditional methods that iteratively generate adversarial perturbations from gradients, this paper proposes a color-model-based method for generating semantic adversarial examples. The method exploits the shape preference that both human vision and convolutional models exhibit in object recognition, and it generates adversarial examples by perturbing color channels after a color-model transformation. Sample generation requires no network parameters, loss function, or structural information about the target model; it relies only on the color-model transformation and random perturbation of channel information. The resulting adversarial examples can therefore mount a black-box attack.
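The core idea above can be sketched in code. This is a hedged illustration, not the paper's implementation: it assumes HSV as the color model and a single random hue shift as the channel perturbation, which leaves saturation and value (and hence shapes and edges) untouched so the human-perceived semantics are preserved.

```python
# Sketch of semantic adversarial perturbation via a color-model transform.
# Assumptions (not from the paper): HSV color model, one global random
# hue offset as the channel disturbance.
import colorsys
import random

def hue_shift_image(pixels, max_shift=0.5, rng=None):
    """Perturb an RGB image, given as a list of (r, g, b) floats in
    [0, 1], by shifting every pixel's hue by one random offset while
    keeping saturation and value fixed, so object shape is unchanged."""
    rng = rng or random.Random()
    shift = rng.uniform(-max_shift, max_shift)   # one global hue offset
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # transform color model
        h = (h + shift) % 1.0                    # hue wraps around [0, 1)
        out.append(colorsys.hsv_to_rgb(h, s, v)) # back to RGB
    return out

# Toy 2-pixel "image": one reddish pixel, one greenish pixel.
image = [(0.9, 0.1, 0.1), (0.1, 0.9, 0.1)]
adv = hue_shift_image(image, rng=random.Random(0))
```

A black-box attack loop would simply repeat this with fresh random shifts, querying the target classifier each time, until its prediction flips; no gradients or model internals are needed.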

Key words: adversarial examples, convolutional neural network, semantic feature, black-box attack


