[1] KONEČNÝ J, MCMAHAN H B, RAMAGE D, et al. Federated optimization: distributed machine learning for on-device intelligence[J]. arXiv:1610.02527, 2016.
[2] KONEČNÝ J, MCMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[J]. arXiv:1610.05492, 2016.
[3] MCMAHAN H B, MOORE E, RAMAGE D, et al. Federated learning of deep networks using model averaging[J]. arXiv:1602.05629, 2016.
[4] JOCHEMS A, DEIST T M, EL NAQA I, et al. Developing and validating a survival prediction model for NSCLC patients through distributed learning across 3 countries[J]. International Journal of Radiation Oncology·Biology·Physics, 2017, 99(2): 344-352.
[5] ZHANG J, TAO D C. Empowering things with intelligence: a survey of the progress, challenges, and opportunities in artificial intelligence of things[J]. IEEE Internet of Things Journal, 2021, 8(10): 7789-7817.
[6] MELIS L, SONG C Z, DE CRISTOFARO E, et al. Exploiting unintended feature leakage in collaborative learning[J]. arXiv:1805.04049, 2018.
[7] ZHAO B, MOPURI K R, BILEN H. iDLG: improved deep leakage from gradients[J]. arXiv:2001.02610, 2020.
[8] ZHU L, LIU Z, HAN S. Deep leakage from gradients[C]//Advances in Neural Information Processing Systems, 2019: 14774-14784.
[9] CAO D, CHANG S, LIN Z J, et al. Understanding distributed poisoning attack in federated learning[C]//Proceedings of the 2019 IEEE 25th International Conference on Parallel and Distributed Systems. Piscataway: IEEE, 2020: 233-239.
[10] HUANG A B. Dynamic backdoor attacks against federated learning[J]. arXiv:2011.07429, 2020.
[11] GU T Y, DOLAN-GAVITT B, GARG S. BadNets: identifying vulnerabilities in the machine learning model supply chain[J]. arXiv:1708.06733, 2017.
[12] DWORK C, ROTH A. The algorithmic foundations of differential privacy[J]. Foundations and Trends in Theoretical Computer Science, 2014, 9(3/4): 211-407.
[13] ABADI M, CHU A, GOODFELLOW I, et al. Deep learning with differential privacy[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2016: 308-318.
[14] GEYER R C, KLEIN T, NABI M. Differentially private federated learning: a client level perspective[J]. arXiv:1712.07557, 2017.
[15] SHAMIR A. How to share a secret[J]. Communications of the ACM, 1979, 22(11): 612-613.
[16] SHAYAN M, FUNG C, YOON C J M, et al. Biscotti: a blockchain system for private and secure federated learning[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(7): 1513-1525.
[17] KHAZBAK Y, TAN T X, CAO G H. MLGuard: mitigating poisoning attacks in privacy preserving distributed collaborative learning[C]//Proceedings of the 2020 29th International Conference on Computer Communications and Networks. Piscataway: IEEE, 2020: 1-9.
[18] LI Q X, CHRISTENSEN M G. A privacy-preserving asynchronous averaging algorithm based on Shamir’s secret sharing[C]//Proceedings of the 2019 27th European Signal Processing Conference. Piscataway: IEEE, 2019: 1-5.
[19] SUN X Q, ZHANG P, LIU J K, et al. Private machine learning classification based on fully homomorphic encryption[J]. IEEE Transactions on Emerging Topics in Computing, 2020, 8(2): 352-364.
[20] GUO J L, WU J, LIU A F, et al. LightFed: an efficient and secure federated edge learning system on model splitting[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(11): 2701-2713.
[21] ZENG P J, LIU A F, XIONG N N, et al. TD-MDB: a truth-discovery-based multidimensional bidding strategy for federated learning in industrial IoT systems[J]. IEEE Internet of Things Journal, 2024, 11(3): 4274-4288.
[22] BLANCHARD P, EL MHAMDI E M, GUERRAOUI R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]//Advances in Neural Information Processing Systems, 2017: 118-128.
[23] EL MHAMDI E M, GUERRAOUI R, ROUAULT S. The hidden vulnerability of distributed learning in Byzantium[C]//Proceedings of the 35th International Conference on Machine Learning, 2018: 3521-3530.
[24] YIN D, CHEN Y D, RAMCHANDRAN K, et al. Byzantine-robust distributed learning: towards optimal statistical rates[C]//Proceedings of the International Conference on Machine Learning, 2018: 5650-5659.
[25] LI X Y, QU Z, ZHAO S Q, et al. LoMar: a local defense against poisoning attack on federated learning[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(1): 437-450.
[26] LIU X Y, LI H W, XU G W, et al. Privacy-enhanced federated learning against poisoning adversaries[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 4574-4588.
[27] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017: 1273-1282.
[28] LYUBASHEVSKY V, PEIKERT C, REGEV O. On ideal lattices and learning with errors over rings[J]. Journal of the ACM, 2013, 60(6): 1-35.
[29] GENTRY C. Fully homomorphic encryption using ideal lattices[C]//Proceedings of the 41st Annual ACM Symposium on Theory of Computing. New York: ACM, 2009: 169-178.
[30] BRAKERSKI Z, VAIKUNTANATHAN V. Fully homomorphic encryption from ring-LWE and security for key dependent messages[C]//Advances in Cryptology: 31st Annual Cryptology Conference. Cham: Springer, 2011: 505-524.
[31] CHEON J H, KIM A, KIM M, et al. Homomorphic encryption for arithmetic of approximate numbers[C]//Proceedings of the 23rd International Conference on the Theory and Applications of Cryptology and Information Security. Cham: Springer, 2017: 409-437.
[32] LI B Y, MICCIANCIO D. On the security of homomorphic encryption on approximate numbers[C]//Proceedings of the 40th Annual International Conference on the Theory and Applications of Cryptographic Techniques. Cham: Springer, 2021: 648-677.
[33] HE K M, ZHANG X, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
[34] Microsoft. SEAL[CP/OL]. (2022-01-26)[2024-05-10]. https://github.com/Microsoft/SEAL.