[1] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv:1312.6199, 2013.
[2] CHEN P Y, ZHANG H, SHARMA Y, et al. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 15-26.
[3] ILYAS A, ENGSTROM L, ATHALYE A, et al. Black-box adversarial attacks with limited queries and information[C]//Proceedings of the International Conference on Machine Learning, 2018: 2137-2146.
[4] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[5] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. New York: ACM, 2017: 506-519.
[6] OREKONDY T, SCHIELE B, FRITZ M. Knockoff nets: stealing functionality of black-box models[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4954-4963.
[7] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems, 2014.
[8] ZHOU M, WU J, LIU Y, et al. DaST: data-free substitute training for adversarial attacks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 234-243.
[9] ODENA A, OLAH C, SHLENS J, et al. Conditional image synthesis with auxiliary classifier GANs[C]//Proceedings of the 34th International Conference on Machine Learning - Volume 70. New York: ACM, 2017: 2642-2651.
[10] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv:1607.02533, 2016.
[11] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv:1706.06083, 2017.
[12] CHENG M H, LE T, CHEN P Y, et al. Query-efficient hard-label black-box attack: an optimization-based approach[J]. arXiv:1807.04457, 2018.
[13] BRENDEL W, RAUBER J, BETHGE M. Decision-based adversarial attacks: reliable attacks against black-box machine learning models[J]. arXiv:1712.04248, 2017.
[14] FANG G, SONG J, SHEN C, et al. Data-free adversarial distillation[J]. arXiv:1912.11006, 2019.
[15] TRUONG J B, MAINI P, WALLS R J, et al. Data-free model extraction[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 4771-4780.
[16] WANG W, YIN B, YAO T, et al. Delving into data: effectively substitute training for black-box attack[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 4761-4770.
[17] KARIYAPPA S, PRAKASH A, QURESHI M K. MAZE: data-free model stealing attack using zeroth-order gradient estimation[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 13809-13818.
[18] ZHANG J, CHEN C, LYU L. IDEAL: query-efficient data-free learning from black-box models[C]//Proceedings of the 11th International Conference on Learning Representations, 2023.
[19] SUN X X, CHENG G, LI H D, et al. Exploring effective data for surrogate training towards black-box attack[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 15355-15364.
[20] DING G W, WANG L, JIN X. advertorch v0.1: an adversarial robustness toolbox based on PyTorch[J]. arXiv:1902.07623, 2019.
[21] ZHANG J, LI B, XU J H, et al. Towards efficient data free black-box adversarial attack[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 15115-15125.