Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (21): 309-323. DOI: 10.3778/j.issn.1002-8331.2407-0515

• Network, Communication and Security •


Verifiable Hierarchical Privacy Protection Federated Learning Scheme for Edge Computing

ZHANG Lei, YE Qiancheng, JI Lili   

  1. School of Information and Electronic Technology, Jiamusi University, Jiamusi, Heilongjiang 154007, China
    2.Heilongjiang Province Key Laboratory of Autonomous Intelligence and Information Processing, School of Information and Electronic Technology, Jiamusi University, Jiamusi, Heilongjiang 154007, China
    3.Jiamusi Key Laboratory of Satellite Navigation Technology and Equipment Engineering Technology, Jiamusi, Heilongjiang 154007, China
    4.Department of Science and Technology, Jiamusi University, Jiamusi, Heilongjiang 154007, China
  • Online:2025-11-01 Published:2025-10-31



Abstract: Federated learning under edge computing can promote device collaboration, improve learning efficiency, and effectively remedy the shortcomings of existing learning approaches. However, during federated learning, participants' sensitive information may leak through their local models, and a malicious aggregation server may return incorrect results. To address the privacy-leakage and incorrect-result problems in federated learning under edge computing, this paper proposes a cloud-edge-terminal collaborative verifiable hierarchical privacy protection federated learning scheme (VHPPFL). In this scheme, the terminal device layer protects local models by adding masks; the edge server layer protects the aggregated model with differential privacy and homomorphic encryption; and staged verification of the aggregation guarantees the correctness of the result while reducing verification overhead. Experiments on the FashionMNIST dataset show that the scheme protects model parameters and verifies correctness with high efficiency.
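To make the layered mechanism in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's actual protocol) of its two privacy steps: terminal devices hide individual updates behind pairwise additive masks that cancel in the sum, and the edge layer adds Gaussian noise to the aggregate for differential privacy. All function names, the noise scale, and the use of NumPy arrays as model updates are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_masks(n_clients, dim, seed=42):
    # Illustrative: each unordered client pair (i, j), i < j, shares one mask.
    # Client i adds it and client j subtracts it, so masks cancel in the sum
    # while hiding each individual update from the aggregator.
    prg = np.random.default_rng(seed)
    return {(i, j): prg.normal(size=dim)
            for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i, update, masks, n_clients):
    # Terminal device layer: mask the local model update before upload.
    out = update.copy()
    for j in range(n_clients):
        if i < j:
            out += masks[(i, j)]
        elif j < i:
            out -= masks[(j, i)]
    return out

n, dim = 4, 8
updates = [rng.normal(size=dim) for _ in range(n)]
masks = pairwise_masks(n, dim)
masked = [masked_update(i, updates[i], masks, n) for i in range(n)]

# Edge server layer: masks cancel on aggregation, then Gaussian noise
# provides differential privacy (sigma is a placeholder; a real deployment
# would calibrate it to the privacy budget).
agg = sum(masked)
sigma = 0.1
dp_agg = agg + rng.normal(scale=sigma, size=dim)

assert np.allclose(sum(masked), sum(updates))  # masks cancel exactly
```

The key property shown by the final assertion is that masking changes no individual observer's view of the sum: the aggregate of masked updates equals the aggregate of true updates, so only the DP noise perturbs the released model.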

Key words: edge computing, federated learning, privacy protection, correctness verification