[1] WORTSMAN M, ILHARCO G, GADRE S Y, et al. Model soups: averaging weights of multiple finetuned models improves accuracy without increasing inference time[C]//Proceedings of the International Conference on Machine Learning, 2022: 23965-23998.
[2] BAO H, DONG L, PIAO S, et al. BEiT: BERT pre-training of image transformers[J]. arXiv:2106.08254, 2021.
[3] TAN M, LE Q. EfficientNet: rethinking model scaling for convolutional neural networks[C]//Proceedings of the International Conference on Machine Learning, 2019: 6105-6114.
[4] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[5] MELIS G, KOČISKÝ T, BLUNSOM P. Mogrifier LSTM[J]. arXiv:1909.01792, 2019.
[6] YAMADA I, ASAI A, SHINDO H, et al. LUKE: deep contextualized entity representations with entity-aware self-attention[J]. arXiv:2010.01057, 2020.
[7] KOLOBOV R, OKHAPKINA O, OMELCHISHINA O, et al. MediaSpeech: multilanguage ASR benchmark and dataset[J]. arXiv:2103.16193, 2021.
[8] PARK D S, ZHANG Y, JIA Y, et al. Improved noisy student training for automatic speech recognition[J]. arXiv:2005.09629, 2020.
[9] XU Q T, BAEVSKI A, LIKHOMANENKO T, et al. Self-training and pre-training are complementary for speech recognition[C]//Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021: 3030-3034.
[10] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv:1312.6199, 2013.
[11] GOODFELLOW I, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[12] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv:1607.02533, 2016.
[13] HUANG S, PAPERNOT N, GOODFELLOW I, et al. Adversarial attacks on neural network policies[J]. arXiv:1702.02284, 2017.
[14] GU T Y, DOLAN-GAVITT B, GARG S. BadNets: identifying vulnerabilities in the machine learning model supply chain[J]. arXiv:1708.06733, 2017.
[15] WENG C H, LEE Y T, WU S H. On the trade-off between adversarial and backdoor robustness[C]//Advances in Neural Information Processing Systems, 2020, 33: 11973-11983.
[16] LI Y M, JIANG Y, LI Z F, et al. Backdoor learning: a survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(1): 5-22.
[17] LI Y G, LYU X X, KOREN N, et al. Anti-backdoor learning: training clean models on poisoned data[C]//Advances in Neural Information Processing Systems, 2021, 34: 14900-14912.
[18] TRAN B, LI J, MADRY A. Spectral signatures in backdoor attacks[C]//Advances in Neural Information Processing Systems, 2018, 31.
[19] CHEN X Y, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv:1712.05526, 2017.
[20] LIU Y F, MA X J, BAILEY J, et al. Reflection backdoor: a natural backdoor attack on deep neural networks[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, Aug 23-28, 2020. Cham: Springer International Publishing, 2020: 182-199.
[21] LIAO C, ZHONG H T, SQUICCIARINI A, et al. Backdoor embedding in convolutional neural network models via invisible perturbation[J]. arXiv:1808.10307, 2018.
[22] LI S F, XUE M H, ZHAO B Z H, et al. Invisible backdoor attacks on deep neural networks via steganography and regularization[J]. IEEE Transactions on Dependable and Secure Computing, 2020, 18(5): 2088-2105.
[23] CHEN J Y, ZHENG H B, SU M M, et al. Invisible poisoning: highly stealthy targeted poisoning attack[C]//Proceedings of the 15th International Conference on Information Security and Cryptology, Nanjing, China, Dec 6-8, 2019. Cham: Springer International Publishing, 2020: 173-198.
[24] SAHA A, SUBRAMANYA A, PIRSIAVASH H. Hidden trigger backdoor attacks[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 11957-11965.
[25] ZHAO S H, MA X J, ZHENG X, et al. Clean-label backdoor attacks on video recognition models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 14443-14452.
[26] NGUYEN T A, TRAN A. Input-aware dynamic backdoor attack[C]//Advances in Neural Information Processing Systems, 2020, 33: 3454-3464.
[27] LI Y Z, LI Y M, WU B Y, et al. Invisible backdoor attack with sample-specific triggers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 16463-16472.
[28] SHAFAHI A, HUANG W R, NAJIBI M, et al. Poison frogs! Targeted clean-label poisoning attacks on neural networks[C]//Advances in Neural Information Processing Systems, 2018, 31.
[29] TIAN G, JIANG W, LIU W, et al. Poisoning MorphNet for clean-label backdoor attack to point clouds[J]. arXiv:2105.04839, 2021.
[30] ZHU C, HUANG W R, LI H D, et al. Transferable clean-label poisoning attacks on deep neural nets[C]//Proceedings of the International Conference on Machine Learning, 2019: 7614-7623.
[31] LIU Y, MA S, AAFER Y, et al. Trojaning attack on neural networks[C]//Proceedings of the 25th Annual Network and Distributed System Security Symposium (NDSS 2018), 2018.
[32] ZHAO S H, MA X J, WANG Y S, et al. What do deep nets learn? class-wise patterns revealed in the input space[J]. arXiv:2101.06898, 2021.
[33] CHEN B, CARVALHO W, BARACALDO N, et al. Detecting backdoor attacks on deep neural networks by activation clustering[J]. arXiv:1811.03728, 2018.
[34] GAO Y S, XU C, WANG D, et al. STRIP: a defence against trojan attacks on deep neural networks[C]//Proceedings of the 35th Annual Computer Security Applications Conference, 2019: 113-125.
[35] XU X J, WANG Q, LI H C, et al. Detecting AI Trojans using meta neural analysis[C]//Proceedings of the 2021 IEEE Symposium on Security and Privacy (SP), 2021: 103-120.
[36] HAYASE J, KONG W H, SOMANI R, et al. SPECTRE: defending against backdoor attacks using robust statistics[C]//Proceedings of the International Conference on Machine Learning, 2021: 4129-4139.
[37] TANG D, WANG X F, TANG H X, et al. Demon in the variant: statistical analysis of DNNs for robust backdoor contamination detection[C]//Proceedings of the USENIX Security Symposium, 2021: 1541-1558.
[38] WANG B, YAO Y S, SHAN S, et al. Neural cleanse: identifying and mitigating backdoor attacks in neural networks[C]//Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), 2019: 707-723.
[39] CHEN H L, FU C, ZHAO J S, et al. DeepInspect: a black-box trojan detection and mitigation framework for deep neural networks[C]//Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2019.
[40] ZHAO P, CHEN P Y, DAS P, et al. Bridging mode connectivity in loss landscapes and adversarial robustness[J]. arXiv:2005.00060, 2020.
[41] LI Y G, LYU X X, KOREN N, et al. Neural attention distillation: erasing backdoor triggers from deep neural networks[J]. arXiv:2101.05930, 2021.
[42] WU D X, WANG Y S. Adversarial neuron pruning purifies backdoored deep models[C]//Advances in Neural Information Processing Systems, 2021, 34: 16913-16925.
[43] BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning[C]//Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), 2019: 101-105.
[44] ZENG Y, PARK W, MAO Z M, et al. Rethinking the backdoor attacks’ triggers: a frequency perspective[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 16453-16461.
[45] NGUYEN A, TRAN A. WaNet: imperceptible warping-based backdoor attack[J]. arXiv:2102.10369, 2021.
[46] ZAGORUYKO S, KOMODAKIS N. Wide residual networks[J]. arXiv:1605.07146, 2016.