Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (24): 302-312. DOI: 10.3778/j.issn.1002-8331.2508-0316

• Network, Communication and Security •

Secure Incentive Mechanism Scheme for Federated Learning Based on Adaptive Gradient Clipping

CAO Yang1,2,3+, GUAN Guilin2,3, ZHI Ting2,3, CAI Huimin2,3   

  1. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
    2.CETC Big Data Research Institute Co., Ltd., Guiyang 550022, China
    3.National Engineering Research Center of Big Data Application on Improving Government Governance Capacities, Guiyang 550022, China
  • Online: 2025-12-15    Published: 2025-12-15

Abstract: Federated learning faces two core challenges: the model parameters uploaded by clients lack privacy protection, and single-dimensional evaluation of data contribution leads to unfair incentives. Existing privacy-preserving techniques such as homomorphic encryption incur high computational overhead, while differential privacy often slows model convergence and reduces accuracy, making both ill-suited for real-time applications that demand low latency and high efficiency. Meanwhile, conventional client contribution evaluation methods typically ignore differences in data quality and diversity, so the global model favors clients with larger data volumes and clients have little incentive to contribute high-quality data. To address these issues, this paper proposes an incentive mechanism for federated learning that combines adaptive-gradient-clipping-based differential privacy with multi-dimensional contribution evaluation. Gaussian noise is injected into client gradients under a differential privacy framework to protect sensitive information, and the gradient clipping threshold is updated dynamically from the current gradient values and a public reference dataset, giving fine-grained control over the noise scale and improving the utility of the perturbed gradients. A multi-dimensional contribution evaluation method then assesses each client's contribution in terms of data quantity, data diversity, and marginal model improvement, and adaptively adjusts the corresponding weights so that rewards are allocated fairly and accurately, encouraging clients to contribute high-quality data and improving both global model performance and data utility. Experimental results show that, compared with traditional federated learning schemes, the proposed scheme achieves higher training efficiency and model accuracy while preserving data privacy.
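
A rough sketch (not part of the original abstract) of the adaptive clipping and noise step in Python. It assumes an exponential-smoothing threshold update that blends the client's current gradient norm with a norm estimated on the public reference dataset; the function names, the smoothing rule, and the parameters alpha and sigma are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def adaptive_clip_threshold(grad_norm, reference_norm, prev_threshold, alpha=0.5):
    # Update the clipping threshold from the current gradient norm and a norm
    # computed on the public reference dataset, smoothed to avoid abrupt jumps.
    target = alpha * grad_norm + (1.0 - alpha) * reference_norm
    return 0.9 * prev_threshold + 0.1 * target

def clip_and_perturb(gradient, threshold, sigma):
    # Clip the gradient to the adaptive threshold and add Gaussian noise
    # calibrated to that threshold (the L2 sensitivity after clipping).
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, threshold / (norm + 1e-12))
    noise = np.random.normal(0.0, sigma * threshold, size=gradient.shape)
    return clipped + noise

# One round for one client: hypothetical gradient, reference norm, and noise level.
grad = np.random.randn(1000)
threshold = adaptive_clip_threshold(np.linalg.norm(grad), reference_norm=3.0, prev_threshold=5.0)
private_grad = clip_and_perturb(grad, threshold, sigma=1.1)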
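
Similarly, a minimal sketch of the multi-dimensional contribution score, assuming a weighted sum of min-max-normalized data quantity, data diversity, and marginal model improvement, with rewards proportional to the resulting scores; the weights and the normalization are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def contribution_scores(quantity, diversity, marginal_gain, weights=(0.3, 0.3, 0.4)):
    # Combine the three per-client metrics into one score per client.
    # Each dimension is min-max normalized so that no single dimension dominates.
    def normalize(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.ones_like(x)
    dims = [normalize(quantity), normalize(diversity), normalize(marginal_gain)]
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(wi * di for wi, di in zip(w, dims))

# Rewards proportional to score, so high-quality data (not merely large volume) pays off.
scores = contribution_scores(quantity=[5000, 800, 1200],
                             diversity=[0.42, 0.77, 0.65],
                             marginal_gain=[0.010, 0.025, 0.018])
rewards = 100.0 * scores / scores.sum()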

Key words: federated learning, incentive mechanism, adaptive gradient clipping, differential privacy, multi-dimensional data contribution evaluation, privacy protection, secure aggregation
