Computer Engineering and Applications, 2023, Vol. 59, Issue 2: 261-270. DOI: 10.3778/j.issn.1002-8331.2206-0184

• Network, Communication and Security •

Adversarial Attacks for Object Detection Based on Region of Interest of Feature Maps

WANG Yekui, CAO Tieyong, ZHENG Yunfei, FANG Zheng, WANG Yang, LIU Yajiu, FU Bingyang, CHEN Lei

  1. College of Command & Control Engineering, Army Engineering University of PLA, Nanjing 210007, China
    2. Unit 31401 of PLA, China
    3. The Army Artillery and Air Defense Academy of PLA, Nanjing 211100, China
    4. Anhui Key Laboratory of Polarization Imaging Detection Technology, Hefei 230031, China
  • Online: 2023-01-15   Published: 2023-01-15

Abstract: Object detection is widely used in fields such as autonomous driving, surveillance, and security. However, object detection systems have been found to be vulnerable to adversarial examples, which degrade their performance and pose a serious threat to their safe application. Most adversarial examples for object detection are designed against one particular type of detection model, and their transferability is weak. To address this problem, an adversarial example generation method for object detection is proposed based on generative adversarial networks. The method designs a position regression attack loss targeting both the non-maximum suppression (NMS) mechanism commonly used in detection models and the key regions of the feature maps that the model attends to. Optimizing the attack with this loss invalidates the model's NMS mechanism and guides the generated region proposals to deviate from the predicted key regions, causing model prediction to fail. Experimental results on the VOC dataset show that the proposed method can effectively attack various object detection models, including Faster-RCNN, SSD300, SSD512, RetinaNet, YOLOv5, and One-Net, improving the transferability of adversarial examples for object detection.
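The abstract only names the position regression attack loss; its exact formulation is not given here. Below is a minimal, hypothetical PyTorch sketch of how a loss with the two stated effects (defeating NMS and pushing proposals away from the attended region) might be shaped. All identifiers (pairwise_iou, position_regression_attack_loss, roi_center, and both loss terms) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch only: an attack loss that (1) pushes proposals away
# from the feature-map region of interest and (2) lowers pairwise IoU so
# NMS can no longer merge duplicates. Not the paper's exact formulation.
import torch


def pairwise_iou(boxes: torch.Tensor) -> torch.Tensor:
    """Dense IoU matrix between all pairs of (x1, y1, x2, y2) boxes."""
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])  # (N,)
    lt = torch.max(boxes[:, None, :2], boxes[None, :, :2])            # (N, N, 2)
    rb = torch.min(boxes[:, None, 2:], boxes[None, :, 2:])            # (N, N, 2)
    wh = (rb - lt).clamp(min=0)                                       # (N, N, 2)
    inter = wh[..., 0] * wh[..., 1]                                   # (N, N)
    return inter / (area[:, None] + area[None, :] - inter + 1e-6)


def position_regression_attack_loss(boxes: torch.Tensor,
                                    scores: torch.Tensor,
                                    roi_center: torch.Tensor) -> torch.Tensor:
    """Illustrative two-term attack loss, minimized by the perturbation generator.

    boxes:      (N, 4) proposals predicted on the perturbed image, (x1, y1, x2, y2)
    scores:     (N,)   objectness/confidence of each proposal
    roi_center: (2,)   center of the detector's feature-map region of interest
    """
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0                     # (N, 2)

    # Term 1: drive high-confidence proposals away from the attended region,
    # so regressed boxes deviate from the key regions the model predicts.
    dist = torch.norm(centers - roi_center, dim=1)                    # (N,)
    away_loss = -(scores.detach() * dist).mean()

    # Term 2: shrink pairwise overlap between proposals; NMS relies on high
    # IoU to suppress duplicates, so low mutual IoU leaves it ineffective.
    overlap_loss = pairwise_iou(boxes).triu(diagonal=1).mean()

    return away_loss + overlap_loss
```

In a GAN-based attack of the kind the abstract describes, such a loss would presumably be minimized jointly with the generator's perceptibility constraint while training the perturbation generator against a surrogate detector.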

Key words: object detection, adversarial attack, generative adversarial network (GAN), transferability, non-maximum suppression, region of interest
