BAI Zhixu, WANG Hengjun, GUO Kexiang. Summary of Adversarial Examples Techniques Based on Deep Neural Networks[J]. Computer Engineering and Applications, 2021, 57(23): 61-70.
[1] CUBUK E D,ZOPH B,MANE D,et al.AutoAugment:learning augmentation strategies from data[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2019.
[2] EHMER M,KHAN F.A comparative study of white box,black box and grey box testing techniques[J].International Journal of Advanced Computer Science & Applications,2012,3(6):1-12.
[3] PAPERNOT N,MCDANIEL P,GOODFELLOW I,et al.Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security,2017:506-519.
[4] SZEGEDY C,ZAREMBA W,SUTSKEVER I,et al.Intriguing properties of neural networks[J].arXiv:1312.6199,2013.
[5] ROZSA A,RUDD E M,BOULT T E.Adversarial diversity and hard positive generation[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops(CVPRW),2016.
[6] GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and harnessing adversarial examples[J].arXiv:1412.6572,2014.
[7] MAHENDRAN A,VEDALDI A.Understanding deep image representations by inverting them[J].arXiv:1412.0035v1,2014.
[8] ILYAS A,ENGSTROM L,ATHALYE A,et al.Black-box adversarial attacks with limited queries and information[C]//International Conference on Machine Learning,2018:2137-2146.
[9] WIERSTRA D,SCHAUL T,GLASMACHERS T,et al.Natural evolution strategies[J].The Journal of Machine Learning Research,2014,15(1):949-980.
[10] ASSION F,SCHLICHT P,GRESNER F,et al.The attack generator:a systematic approach towards constructing adversarial attacks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops.Piscataway:IEEE,2019:1370-1379.
[11] LU J,SIBAI H,FABRY E.Adversarial examples that fool detectors[J].arXiv:1712.02494,2017.
[12] PAPERNOT N,MCDANIEL P,GOODFELLOW I,et al.Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security,2017:506-519.
[13] KURAKIN A,GOODFELLOW I,BENGIO S.Adversarial examples in the physical world[J].arXiv:1607.02533,2016.
[14] DONG Y,LIAO F,PANG T,et al.Boosting adversarial attacks with momentum[J].arXiv:1710.06081,2017.
[15] MOOSAVI-DEZFOOLI S M,FAWZI A,FROSSARD P.DeepFool:a simple and accurate method to fool deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR),2016:2574-2582.
[16] CARLINI N,WAGNER D.Towards evaluating the robustness of neural networks[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy(SP),2017:39-57.
[17] KURAKIN A,GOODFELLOW I,BENGIO S.Adversarial examples in the physical world[J].arXiv:1607.02533,2016.
[18] PAPERNOT N,MCDANIEL P,JHA S,et al.The limitations of deep learning in adversarial settings[C]//Proceedings of the IEEE European Symposium on Security and Privacy,2016:372-387.
[19] HU W,TAN Y.Black-box attacks against RNN based malware detection algorithms[J].arXiv:1705.08131,2017.
[20] ROZSA A,RUDD E M,BOULT T E.Adversarial diversity and hard positive generation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,Las Vegas,USA,2016:410-417.
[21] SHI Y,WANG S,HAN Y.Curls&Whey:boosting black-box adversarial attacks[J].arXiv:1904.01160,2019.
[22] BARTLETT P L,WEGKAMP M H.Classification with a reject option using a hinge loss[J].Journal of Machine Learning Research,2008,9:1823-1840.
[23] SARKAR S,BANSAL A,MAHBUB U,et al.UPSET and ANGRI:breaking high performance image classifiers[J].arXiv:1707.01159,2017.
[24] XIE C,WANG J,ZHANG Z,et al.Adversarial examples for semantic segmentation and object detection[C]//Proceedings of the IEEE International Conference on Computer Vision,Venice,Italy,2017:1378-1387.
[25] KARMON D,ZORAN D,GOLDBERG Y.LaVAN:localized and visible adversarial noise[J].arXiv:1801.02608,2018.
[26] SHARIF M,BHAGAVATULA S,BAUER L,et al.Accessorize to a crime:real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,2016:1528-1540.
[27] LIU A S,LIU X L,FAN J X,et al.Perceptual-sensitive GAN for generating adversarial patches[C]//Proceedings of the AAAI Conference on Artificial Intelligence,2019:1028-1035.
[28] SU J,VARGAS D V,SAKURAI K.One pixel attack for fooling deep neural networks[J].arXiv:1710.08864,2017.
[29] BALUJA S,FISCHER I.Adversarial transformation networks:learning to generate adversarial examples[J].arXiv:1703.09387,2017.
[30] MOOSAVI-DEZFOOLI S M,FAWZI A,FAWZI O,et al.Universal adversarial perturbations[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Piscataway:IEEE,2017:86-94.
[31] BROWN T B,MANE D,ROY A,et al.Adversarial patch[J].arXiv:1712.09665,2017.
[32] THYS S,VAN RANST W,GOEDEME T.Fooling automated surveillance cameras:adversarial patches to attack person detection[J].arXiv:1904.08653,2019.
[33] CROCE F,HEIN M.Sparse and imperceivable adversarial attacks[C]//Proceedings of the IEEE International Conference on Computer Vision,2019:4724-4732.
[34] DENG L.The MNIST database of handwritten digit images for machine learning research[Best of the Web][J].IEEE Signal Processing Magazine,2012,29(6):141-142.
[35] DENG J,DONG W,SOCHER R,et al.ImageNet:a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR),2009:248-255.
[36] ZHAO Q,GRIFFIN L D.Suppressing the unusual:towards robust CNNs using symmetric activation functions[J].arXiv:1603.05145,2016.
[37] HUANG R,XU B,SCHUURMANS D,et al.Learning with a strong adversary[J].arXiv:1511.03034,2015.
[38] TRAMER F,KURAKIN A,PAPERNOT N,et al.Ensemble adversarial training:attacks and defenses[J].arXiv:1705.07204,2017.
[39] WANG D N,CHEN W,YANG Y,et al.An adversarial training defense method based on Gaussian enhancement and iterative attacks[J].Computer Science,2021,48(S1):509-513.
[40] PAPERNOT N,MCDANIEL P,GOODFELLOW I,et al.Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security,2017:506-519.
[41] DONG Y P,SU H,ZHU J.Towards interpretable deep neural networks by leveraging adversarial examples[J/OL].Acta Automatica Sinica:1-14[2021-04-27].https://doi.org/10.16383/j.aas.c200317.