[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems, 2012: 1097-1105.
[2] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[3] ZHANG S, LIU C, JIANG H, et al. Feedforward sequential memory networks: a new structure to learn long-term dependency[J]. arXiv:1512.08301, 2015.
[4] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186.
[5] GU T, LIU K, DOLAN-GAVITT B, et al. BadNets: evaluating backdooring attacks on deep neural networks[J]. IEEE Access, 2019, 7: 47230-47244.
[6] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv:1312.6199, 2013.
[7] LU J, ISSARANON T, FORSYTH D. SafetyNet: detecting and rejecting adversarial examples robustly[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 446-454.
[8] HONG S, CARLINI N, KURAKIN A. Handcrafted backdoors in deep neural networks[C]//Advances in Neural Information Processing Systems, 2022: 8068-8080.
[9] LI Y, LI Y, WU B, et al. Invisible backdoor attack with sample-specific triggers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 16463-16472.
[10] NGUYEN A, TRAN A. WaNet: imperceptible warping-based backdoor attack[J]. arXiv:2102.10369, 2021.
[11] LIU K, DOLAN-GAVITT B, GARG S. Fine-pruning: defending against backdooring attacks on deep neural networks[C]//Proceedings of the International Symposium on Research in Attacks, Intrusions, and Defenses, 2018: 273-294.
[12] LI Y, LYU X, KOREN N, et al. Neural attention distillation: erasing backdoor triggers from deep neural networks[J]. arXiv:2101.05930, 2021.
[13] LI Y, LYU X, MA X, et al. Reconstructive neuron pruning for backdoor defense[C]//Proceedings of the International Conference on Machine Learning, 2023: 19837-19854.
[14] KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526.
[15] WANG B, YAO Y, SHAN S, et al. Neural cleanse: identifying and mitigating backdoor attacks in neural networks[C]//Proceedings of the 2019 IEEE Symposium on Security and Privacy, 2019: 707-723.
[16] LI Y, JIANG Y, LI Z, et al. Backdoor learning: a survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(1): 5-22.
[17] CHEN X, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv:1712.05526, 2017.
[18] QI X, XIE T, LI Y, et al. Revisiting the assumption of latent separability for backdoor defenses[C]//Proceedings of the 11th International Conference on Learning Representations, 2023.
[19] LIU Y, MA S, AAFER Y, et al. Trojaning attack on neural networks[C]//Proceedings of the 25th Annual Network and Distributed System Security Symposium, 2018.
[20] NGUYEN T A, TRAN A. Input-aware dynamic backdoor attack[C]//Advances in Neural Information Processing Systems, 2020: 3454-3464.
[21] BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning[C]//Proceedings of the 2019 IEEE International Conference on Image Processing, 2019: 101-105.
[22] CAI R, ZHANG Z, CHEN T, et al. Randomized channel shuffling: minimal-overhead backdoor attack detection without clean datasets[C]//Advances in Neural Information Processing Systems, 2022: 33876-33889.
[23] ZHENG R, TANG R, LI J, et al. Data-free backdoor removal based on channel Lipschitzness[C]//Proceedings of the European Conference on Computer Vision, 2022: 175-191.
[24] ZENG Y, CHEN S, PARK W, et al. Adversarial unlearning of backdoors via implicit hypergradient[C]//Proceedings of the International Conference on Learning Representations, 2022.
[25] ZENG Y, PARK W, MAO Z M, et al. Rethinking the backdoor attacks’ triggers: a frequency perspective[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 16473-16481.
[26] LI Y, ZHAI T, JIANG Y, et al. Backdoor attack in the physical world[J]. arXiv:2104.02361, 2021.
[27] MCCLOSKEY M, COHEN N J. Catastrophic interference in connectionist networks: the sequential learning problem[M]//Psychology of Learning and Motivation: Vol 24. San Diego: Academic Press, 1989: 109-165.