Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (13): 61-73. DOI: 10.3778/j.issn.1002-8331.2210-0470
WEI Jian, SONG Xiaoqing, WANG Qinzhao
Online:
2023-07-01
Published:
2023-07-01
Abstract: Intelligent adversarial attack algorithms against recognition systems are an emerging research direction in deep learning and are attracting growing attention. This paper introduces the workflow and main stages of black-box intelligent adversarial attack algorithms targeting object recognition technology, and surveys them in terms of algorithm principles, cost functions, attack performance, and application scenarios. It analyzes the requirements that black-box attacks place on training data and models and the corresponding deployment strategies, and summarizes the principles, strengths, and weaknesses of data-based and surrogate-model-based adversarial attack algorithms. From the perspectives of improving attack effectiveness, enhancing attack generalization, reducing the number of model iterations, and extending the application scenarios of adversarial examples, it examines the research progress of surrogate-model-based algorithms, that is, the roles that diversified cost functions, ensembles of training models, optimized parameter-update spaces, and improved parameter-update strategies play in adversarial example generation. Taking attacks on face recognition systems, autonomous driving systems, and tracking systems as typical application scenarios, it reviews the real-world deployment of these algorithms. Finally, against a military-application background, it discusses the challenges facing research on black-box intelligent adversarial attack algorithms and possible solutions.
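The surrogate-model transfer attack surveyed above can be illustrated with a minimal sketch: an attacker who cannot query the target's gradients trains (or here, simply assumes) a local surrogate classifier, crafts a perturbation against it with an FGSM-style sign-gradient step, and transfers the perturbed input to the black-box target. The toy linear surrogate, its fixed weights, and the epsilon value below are all illustrative assumptions, not part of the surveyed algorithms; in practice the surrogate is a deep network and the gradient comes from autodiff.

```python
import numpy as np

# Toy surrogate: a binary logistic classifier with fixed weights stands in
# for the locally trained substitute model (illustrative values only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Surrogate's probability of class 1 for input x."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """One FGSM step: x' = x + eps * sign(d loss / d x).

    For the logistic loss, d loss / d x = (p - y) * w, so this toy setting
    needs no autodiff; a real attack would backpropagate through the
    surrogate network instead.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])   # clean input, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.6)

# The surrogate classifies the clean input correctly but flips on the
# adversarial one; the transfer hypothesis is that the unseen black-box
# target flips as well.
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```

Ensemble training, momentum-based updates, and input diversity, discussed in the survey, all modify how `grad` is estimated or accumulated in the `fgsm` step to make the perturbation transfer more reliably.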
WEI Jian, SONG Xiaoqing, WANG Qinzhao. Research Status of Black-Box Intelligent Adversarial Attack Algorithms[J]. Computer Engineering and Applications, 2023, 59(13): 61-73.