Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (14): 294-305. DOI: 10.3778/j.issn.1002-8331.2304-0287

• Network, Communication and Security •

Secure Aggregation Scheme for Horizontal Federated Learning System

HUANG Xiuli, YU Pengfei, GAO Xianzhou

  1. State Grid Key Laboratory of Information and Network Security, State Grid Smart Grid Research Institute Co., Ltd., Nanjing 210003, China
  • Online: 2024-07-15  Published: 2024-07-15

Abstract: A secure model aggregation scheme is proposed for privacy-preserving horizontal federated learning systems. When homomorphic encryption is used for privacy protection in the horizontal federated learning system, the aggregation server can accurately detect model poisoning attacks launched by Byzantine nodes and prevent anomalous local models from taking part in the aggregation of the global model. Experimental results show that the proposed scheme yields a securely aggregated, high-accuracy global model even when Byzantine nodes in the system launch model poisoning attacks, and it introduces little extra computational and communication overhead to the federated learning system.
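The abstract does not spell out how anomalous updates are detected under homomorphic encryption, so the following is only a minimal plaintext sketch of the general idea of screening local updates before aggregation; the robust z-score rule, the threshold, and all function and variable names are illustrative assumptions, not the scheme described in the paper.

import numpy as np

def filter_and_aggregate(local_updates, z_thresh=3.5):
    """Drop local updates whose L2 norm deviates strongly from the median
    (a simple stand-in for model-poisoning detection), then average the rest."""
    updates = np.stack(local_updates)             # shape: (n_clients, n_params)
    norms = np.linalg.norm(updates, axis=1)       # one norm per client
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12  # robust spread estimate
    keep = np.abs(norms - med) / mad < z_thresh   # flag suspected Byzantine updates
    if not keep.any():                            # fall back if everything is flagged
        keep[:] = True
    return updates[keep].mean(axis=0), np.flatnonzero(~keep)

# Toy usage: eight honest clients plus two clients sending poisoned updates.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
poisoned = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]
global_update, rejected = filter_and_aggregate(honest + poisoned)
print("rejected client indices:", rejected)       # should include the two poisoned clients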

Key words: federated learning, secure aggregation, Byzantine attack, anomaly detection, privacy preserving
