[1] LI H, HUANG H, CHEN L, et al. Adversarial examples for CNN-based SAR image classification: an experience study[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 1333-1347.
[2] ROY A M, BOSE R, BHADURI J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network[J]. Neural Computing and Applications, 2022, 34: 3895-3921.
[3] LI L, MU X, LI S, et al. A review of face recognition technology[J]. IEEE Access, 2020, 8: 139110-139120.
[4] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[5] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//Proceedings of the IEEE European Symposium on Security and Privacy, 2016: 372-387.
[6] ZHENG H, ZHANG Z, GU J, et al. Efficient adversarial training with transferable adversarial examples[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition, 2020: 1181-1190.
[7] WANG X, HE X, WANG J, et al. Admix: enhancing the transferability of adversarial attacks[C]//Proceedings of the International Conference on Computer Vision, 2021: 16158-16167.
[8] DEMONTIS A, MELIS M, PINTOR M, et al. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks[C]//Proceedings of the 28th USENIX Security Symposium, 2019: 321-338.
[9] WANG Z, GUO H, ZHANG Z, et al. Feature importance-aware transferable adversarial attacks[C]//Proceedings of the International Conference on Computer Vision, 2021: 7639-7648.
[10] YANG K Y, YAU J H, LI F F, et al. A study of face obfuscation in ImageNet[C]//Proceedings of the 39th International Conference on Machine Learning, 2022: 25313-25330.
[11] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv:1607.02533, 2016.
[12] DONG Y, LIAO F, PANG T, et al. Boosting adversarial attacks with momentum[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition, 2018: 9185-9193.
[13] LIN J, SONG C, HE K, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[J]. arXiv:1908.06281, 2019.
[14] XIE C, ZHANG Z, ZHOU Y, et al. Improving transferability of adversarial examples with input diversity[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition, 2019: 2730-2739.
[15] DONG Y, PANG T, SU H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition, 2019: 4312-4321.
[16] XIE C, WANG J, ZHANG Z, et al. Mitigating adversarial effects through randomization[J]. arXiv:1711.01991, 2017.
[17] NASEER M, KHAN S, HAYAT M, et al. A self-supervised approach for adversarial robustness[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition, 2020: 262-271.
[18] XU W, EVANS D, QI Y. Feature squeezing: detecting adversarial examples in deep neural networks[J]. arXiv:1704.01155, 2017.
[19] COHEN J, ROSENFELD E, KOLTER Z. Certified adversarial robustness via randomized smoothing[C]//Proceedings of the 36th International Conference on Machine Learning, 2019: 1310-1320.
[20] LI B, CHEN C, WANG W, et al. Certified adversarial robustness with additive noise[J]. arXiv:1809.03113, 2018.