[1] 钱志鸿, 王义君. 物联网技术与应用研究[J]. 电子学报, 2012, 40 (5): 1023-1029.
QIAN Z H, WANG Y J. IoT technology and application[J]. Acta Electronica Sinica, 2012, 40 (5): 1023-1029.
[2] RABINIA S, DIDAR N, BROCANELLI M, et al. Algorithms for data sharing-aware task allocation in edge computing systems[J]. IEEE Transactions on Parallel and Distributed Systems, 2025, 36(1): 15-28.
[3] LI B, QI P, LIU B, et al. Trustworthy AI: from principles to practices[J]. ACM Computing Surveys, 2023, 55(9): 1-46.
[4] 郭思昀, 李雷孝, 杜金泽, 等. 基于区块链的联邦学习系统方案研究综述[J]. 计算机工程与应用, 2025, 61(15): 36-53.
GUO S Y, LI L X, DU J Z, et al. Survey of blockchain-based federated learning system schemes[J]. Computer Engineering and Applications, 2025, 61(15): 36-53.
[5] YANG Q, LIU Y, CHEN T J, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
[6] RIEKE N, HANCOX J, LI W Q, et al. The future of digital health with federated learning[J]. NPJ Digital Medicine, 2020, 3: 119.
[7] WANG Q Y, YIN H Z, CHEN T, et al. Fast-adapting and privacy-preserving federated recommender system[J]. The VLDB Journal, 2022, 31(5): 877-896.
[8] BLANCHARD P, EL MHAMDI E M, GUERRAOUI R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 119-129.
[9] EL MHAMDI E M, GUERRAOUI R, ROUAULT S. The hidden vulnerability of distributed learning in Byzantium[C]//Proceedings of the International Conference on Machine Learning, 2018.
[10] KANG J W, XIONG Z H, NIYATO D, et al. Incentive mechanism for reliable federated learning: a joint optimization approach to combining reputation and contract theory[J]. IEEE Internet of Things Journal, 2019, 6(6): 10700-10714.
[11] ZHU L, LIU Z, HAN S. Deep leakage from gradients[C]//Advances in Neural Information Processing Systems, 2019: 13101-13112.
[12] ZHAO B, MOPURI K R, BILEN H. iDLG: improved deep leakage from gradients[J]. arXiv:2001.02610, 2020.
[13] HAO M, LI H W, XU G W, et al. Towards efficient and privacy-preserving federated deep learning[C]//Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-6.
[14] MCMAHAN H B, RAMAGE D, TALWAR K, et al. Learning differentially private recurrent language models[J]. arXiv:1710.06963, 2017.
[15] LYU L, CHEN C. A novel attribute reconstruction attack in federated learning[J]. arXiv:2108.06910, 2021.
[16] LI Y L, LAI J Z, ZHANG R, et al. Secure and efficient multi-key aggregation for federated learning[J]. Information Sciences, 2024, 654: 119830.
[17] GU Y H, BAI Y B, XU S B. CS-MIA: membership inference attack based on prediction confidence series in federated learning[J]. Journal of Information Security and Applications, 2022, 67: 103201.
[18] HU R, GUO Y X, GONG Y M. Federated learning with sparsified model perturbation: improving accuracy under client-level differential privacy[J]. IEEE Transactions on Mobile Computing, 2024, 23(8): 8242-8255.
[19] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Proceedings of the International Conference on Artificial Intelligence and Statistics, 2017: 1273-1282.
[20] FUKAMI T, MURATA T, NIWA K, et al. DP-norm: differential privacy primal-dual algorithm for decentralized federated learning[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 5783-5797.
[21] LIU J X, LOU J, XIONG L, et al. Cross-silo federated learning with record-level personalized differential privacy[C]//Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2024: 303-317.
[22] LI Y, DU W, HAN L Q, et al. A communication-efficient, privacy-preserving federated learning algorithm based on two-stage gradient pruning and differentiated differential privacy[J]. Sensors, 2023, 23(23): 9305.
[23] VARUN M, FENG S Y, WANG H, et al. Towards accurate and stronger local differential privacy for federated learning with staircase randomized response[C]//Proceedings of the Fourteenth ACM Conference on Data and Application Security and Privacy. New York: ACM, 2024: 307-318.
[24] XU J T, ZHANG C, JIN L, et al. A trust-aware incentive mechanism for federated learning with heterogeneous clients in edge computing[J]. Journal of Cybersecurity and Privacy, 2025, 5(3): 37.
[25] LI B, LU J F, CAO S Q, et al. RATE: game-theoretic design of sustainable incentive mechanism for federated learning[J]. IEEE Internet of Things Journal, 2025, 12(1): 81-96.
[26] THI LE T H, TRAN N H, TUN Y K, et al. An incentive mechanism for federated learning in wireless cellular networks: an auction approach[J]. IEEE Transactions on Wireless Communications, 2021, 20(8): 4874-4887.
[27] DENG Y H, LYU F, REN J, et al. Improving federated learning with quality-aware user incentive and auto-weighted model aggregation[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12): 4515-4529.
[28] 王鑫, 李美庆, 王黎明, 等. 一种基于合同理论的可激励联邦学习模型[J]. 电子与信息学报, 2023, 45(3): 874-883.
WANG X, LI M Q, WANG L M, et al. An incentivized federated learning model based on contract theory[J]. Journal of Electronics & Information Technology, 2023, 45(3): 874-883.
[29] LI L, YU X, CAI X L, et al. Contract-theory-based incentive mechanism for federated learning in health CrowdSensing[J]. IEEE Internet of Things Journal, 2023, 10(5): 4475-4489.
[30] SUN Q H, LI X, ZHANG J Y, et al. ShapleyFL: robust federated learning based on Shapley value[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2023: 2096-2108.