XIONG Su, LING Jie. Dynamic Defense Method Against Adversarial Example Attacks Based on Siamese Structure[J]. Computer Engineering and Applications, 2022, 58(17): 230-238.
[1] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:770-778.
[2] HUANG G,LIU Z,VAN DER MAATEN L,et al.Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2017:4700-4708.
[3] TIAN Y,PEI K,JANA S,et al.DeepTest:automated testing of deep-neural-network-driven autonomous cars[C]//Proceedings of the 40th International Conference on Software Engineering,2018:303-314.
[4] FAYJIE A R,HOSSAIN S,OUALID D,et al.Driverless car:autonomous driving using deep reinforcement learning in urban environment[C]//2018 15th International Conference on Ubiquitous Robots(UR),2018:896-901.
[5] DENG Y,BAO F,KONG Y,et al.Deep direct reinforcement learning for financial signal representation and trading[J].IEEE Transactions on Neural Networks and Learning Systems,2017,28(3):653-664.
[6] SZEGEDY C,ZAREMBA W,SUTSKEVER I,et al.Intriguing properties of neural networks[C]//Proceedings of the 2nd International Conference on Learning Representations,2014.
[7] NGUYEN A,YOSINSKI J,CLUNE J.Deep neural networks are easily fooled:high confidence predictions for unrecognizable images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2015:427-436.
[8] GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and harnessing adversarial examples[J].arXiv:1412.6572,2014.
[9] METZEN J H,KUMAR M C,BROX T,et al.Universal adversarial perturbations against semantic image segmentation[C]//Proceedings of the IEEE International Conference on Computer Vision,2017:2755-2764.
[10] PAPERNOT N,MCDANIEL P,GOODFELLOW I,et al.Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security,2017:506-519.
[11] PAPERNOT N,MCDANIEL P,JHA S,et al.The limitations of deep learning in adversarial settings[C]//2016 IEEE European Symposium on Security and Privacy (EuroS&P),2016:372-387.
[12] WU Y,BAMMAN D,RUSSELL S.Adversarial training for relation extraction[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,2017:1778-1783.
[13] MOOSAVI-DEZFOOLI S M,FAWZI A,FROSSARD P.DeepFool:a simple and accurate method to fool deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:2574-2582.
[14] JIA X,WEI X,CAO X,et al.ComDefend:an efficient image compression model to defend adversarial examples[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2019:6084-6092.
[15] XU W,EVANS D,QI Y.Feature squeezing:detecting adversarial examples in deep neural networks[C]//Network and Distributed System Security Symposium,2018.
[16] LIAO F,LIANG M,DONG Y,et al.Defense against adversarial attacks using high-level representation guided denoiser[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2018:1778-1787.
[17] YUAN X,HE P,ZHU Q,et al.Adversarial examples:attacks and defenses for deep learning[J].IEEE Transactions on Neural Networks and Learning Systems,2019,30(9):2805-2824.
[18] HENDRYCKS D,GIMPEL K.Early methods for detecting adversarial images[J].arXiv:1608.00530,2016.
[19] AKHTAR N,MIAN A.Threat of adversarial attacks on deep learning in computer vision:a survey[J].IEEE Access,2018,6:14410-14430.
[20] CHOPRA S,HADSELL R,LECUN Y.Learning a similarity metric discriminatively,with application to face verification[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition(CVPR’05),2005:539-546.
[21] GOODFELLOW I,LEE H,LE Q,et al.Measuring invariances in deep networks[C]//Advances in Neural Information Processing Systems,2009:646-654.
[22] LECUN Y,BOSER B,DENKER J S,et al.Backpropagation applied to handwritten zip code recognition[J].Neural Computation,1989,1(4):541-551.
[23] CARLINI N,WAGNER D.Adversarial examples are not easily detected:bypassing ten detection methods[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,2017:3-14.