Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (7): 176-187. DOI: 10.3778/j.issn.1002-8331.2405-0333

• Theory, Research and Development •

Model Distance Correction and Aggregation Algorithm for Prototype Federated Learning in Heterogeneous Environments

WANG Xin (王鑫), DING Xueshuang (丁雪爽)

  1. College of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi’an 710021, China
  • Online: 2025-04-01    Published: 2025-04-01

Abstract: To address the large model bias, unstable convergence, and poor generalization that arise in federated learning when client datasets are non-independent and identically distributed (non-IID) and client devices have uneven computing power, this paper proposes a prototype-based model distance correction and aggregation algorithm for federated learning (FedMPD). FedMPD builds an embedding network on each client to extract features from heterogeneous local data and corrects the client models through correction terms defined over the local and global prototypes. The algorithm also introduces a prototype distance constraint that allows each client to adaptively adjust its number of local training epochs according to a distance threshold between its local prototypes and the global prototypes, mitigating the impact of device heterogeneity. In the model aggregation phase, FedMPD adopts a weighted aggregation strategy that jointly considers each client's data volume and the quality of its local prototypes, so that each client's contribution to the global model is quantified more accurately. Experimental results show that FedMPD significantly outperforms traditional federated learning algorithms in convergence stability, test-loss reduction, and test accuracy, providing a stable, efficient, and rigorous method for federated learning in heterogeneous environments.
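The abstract summarizes the algorithm but no reference implementation is given on this page. As a purely illustrative, minimal sketch of the client-side mechanism it describes (class prototypes taken from the embedding network, a prototype-distance correction term, and distance-gated local epochs), one possible reading is shown below; the function names, the squared-distance form of the correction term, and the threshold parameter tau are assumptions made here for illustration, not the authors' implementation.

```python
import numpy as np

def compute_prototypes(features, labels, num_classes):
    """Class prototype = mean embedding of each class present in the local data."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_correction(local_protos, global_protos):
    """Correction term (assumed form): mean squared distance between local and
    global prototypes, added to the usual supervised loss so that heterogeneous
    clients are pulled toward a shared representation."""
    dists = [np.sum((local_protos[c] - global_protos[c]) ** 2)
             for c in local_protos if c in global_protos]
    return float(np.mean(dists)) if dists else 0.0

def adaptive_epochs(local_protos, global_protos, tau, e_min=1, e_max=10):
    """Distance-gated schedule (assumed form): clients whose prototypes have
    drifted far from the global prototypes train for more local epochs, while
    clients already within the threshold tau stop early."""
    d = prototype_correction(local_protos, global_protos)
    if d <= tau:
        return e_min
    return min(e_max, e_min + int(d / tau))
```

In this reading, a client whose local prototypes have drifted far from the global ones both pays a larger correction penalty and trains longer before reporting back; the paper may gate the epoch count in the opposite direction, so the rule above should be taken only as one plausible instantiation of the distance constraint.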

Key words: federated learning, prototype learning, contrastive loss, metric learning, heterogeneous data processing
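Complementing the client-side sketch above, the server-side step described in the abstract, where aggregation weights are derived from both data volume and local prototype quality, might be read as follows. The convex blend controlled by the hypothetical parameter alpha and the use of inverse prototype distance as a quality score are illustrative assumptions, not the paper's actual contribution measure.

```python
import numpy as np

def aggregate(client_states, client_sizes, client_proto_dists, alpha=0.5, eps=1e-8):
    """Weighted aggregation sketch: each client's weight blends its share of the
    total data with a prototype-quality score (here, the inverse of its mean
    local-to-global prototype distance)."""
    sizes = np.asarray(client_sizes, dtype=float)
    size_w = sizes / sizes.sum()
    quality = 1.0 / (np.asarray(client_proto_dists, dtype=float) + eps)
    quality_w = quality / quality.sum()
    weights = alpha * size_w + (1.0 - alpha) * quality_w

    # Global model = weighted average of client parameters, key by key.
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(w * state[key] for w, state in zip(weights, client_states))
    return global_state
```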