Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (23): 24-41. DOI: 10.3778/j.issn.1002-8331.2205-0520
Survey of Research on Adversarial Examples Attack and Defense in Image Classification Model
YAN Jiale, XU Yang, ZHANG Sicong, LI Kezi
Online: 2022-12-01
Published: 2022-12-01
Abstract: Deep learning models have surpassed human performance in image classification, yet research has shown that they are remarkably fragile in the face of adversarial examples, which poses a serious challenge to their deployment in security-sensitive systems. This survey organizes and summarizes the research on adversarial examples in image classification, aiming to establish a basic body of knowledge for further work in this area. It introduces the formal definition of adversarial examples and the related terminology, reviews attack and defense methods, with particular attention to the emerging certifiably robust defenses, and discusses possible explanations for why adversarial examples exist. To underline that adversarial attacks are feasible in the real world, related work on physical-world attacks is also reviewed. On the basis of this survey of the literature, the overall development trends, the remaining challenges, and future research directions for adversarial examples are analyzed.
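For reference, the "formal definition" mentioned in the abstract is usually stated as a constrained search problem. The following is a minimal sketch of the common untargeted, l_p-bounded formulation; the symbols (classifier f, input x, true label y, perturbation δ, budget ε) are the conventional ones, not notation taken from this page:

```latex
% Untargeted adversarial example under an l_p perturbation budget \epsilon:
% given a classifier f and a correctly classified pair (x, y),
% find a small perturbation \delta that flips the prediction.
\exists\, \delta : \quad f(x + \delta) \neq y
\quad \text{subject to} \quad \lVert \delta \rVert_p \le \epsilon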
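To make the gradient-based attack family surveyed here concrete, below is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., written in PyTorch. The function name, the default epsilon, and the assumption that pixel values are normalized to [0, 1] are illustrative choices, not code from the surveyed papers:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels
    loss.backward()                        # populates x.grad
    x_adv = x + epsilon * x.grad.sign()    # step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()  # assumes inputs lie in [0, 1]
```

A larger epsilon yields a stronger but more visible perturbation; iterative variants such as BIM and PGD apply this step repeatedly with a projection back into the epsilon-ball.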
YAN Jiale, XU Yang, ZHANG Sicong, LI Kezi. Survey of Research on Adversarial Examples Attack and Defense in Image Classification Model[J]. Computer Engineering and Applications, 2022, 58(23): 24-41.