Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (10): 61-75. DOI: 10.3778/j.issn.1002-8331.2310-0362
Survey of Physical Adversarial Attacks Against Object Detection Models
CAI Wei, DI Xingyu, JIANG Xinhao, WANG Xin, GAO Weijie
Online: 2024-05-15
Published: 2024-05-15
Abstract: Deep learning models are vulnerable to adversarial examples: adding tiny perturbations imperceptible to the human eye to an image can cause a well-trained deep learning model to fail. Recent studies have shown that such perturbations also exist in the real world. Focusing on physical adversarial attacks against deep learning object detection models, this survey clarifies the concept of a physical adversarial attack and introduces the general pipeline of physical adversarial attacks on object detection. Organized by attack task, it reviews recent physical adversarial attack methods against object detection networks from two perspectives, vehicle detection and pedestrian detection, and briefly covers other attacks on object detection models, other attack tasks, and other attack modalities. Finally, it discusses the current challenges of physical adversarial attacks, points out the limitations of adversarial training, and looks ahead to possible future research directions and applications.
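The core mechanism the abstract describes — a small, gradient-guided perturbation that flips a trained model's output — can be illustrated with a minimal one-step FGSM sketch (the fast gradient sign method of Goodfellow et al.) against a toy logistic "detector" confidence score. The weight vector, input, and `epsilon` budget below are purely illustrative assumptions, not taken from the surveyed methods, which operate on full detection networks and printable physical patches:

```python
import math

def fgsm_perturbation(x, w, b, y, epsilon):
    """One-step FGSM against a toy logistic 'detector' score.

    x: input features; w, b: model weights; y: true label in {0, 1};
    epsilon: L-infinity perturbation budget (the 'imperceptible' bound).
    """
    # Forward pass: sigmoid confidence of the toy model.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
    sign = lambda v: (v > 0) - (v < 0)
    # FGSM: step epsilon in the sign direction of the gradient to raise the loss.
    return [epsilon * sign((p - y) * wi) for wi in w]

def score(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 4-feature 'detector' and input.
w = [0.8, -1.2, 0.5, 2.0]
b = 0.1
x = [1.0, -0.5, 0.3, 0.7]
y = 1  # true class: "object present"
delta = fgsm_perturbation(x, w, b, y, epsilon=0.05)
x_adv = [xi + di for xi, di in zip(x, delta)]
print(score(x, w, b), score(x_adv, w, b))  # confidence drops after the attack
```

Physical attacks surveyed in the paper extend this digital principle with transformations (printing, viewpoint, lighting) so the perturbation survives capture by a real camera.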
CAI Wei, DI Xingyu, JIANG Xinhao, WANG Xin, GAO Weijie. Survey of Physical Adversarial Attacks Against Object Detection Models[J]. Computer Engineering and Applications, 2024, 60(10): 61-75.