Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (6): 177-182. DOI: 10.3778/j.issn.1002-8331.2010-0482

• Pattern Recognition and Artificial Intelligence •


Multi-agent Edge Computing Task Offloading

ZHAO Shuxu, YUAN Lin, ZHANG Zhanping   

  1. School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
  • Online: 2022-03-15  Published: 2022-03-15


Abstract: The development of edge computing offers a new option for computation-intensive services, with low energy consumption, low latency, and real-time processing frequently cited as its advantages; task offloading has therefore attracted wide attention from researchers. Whether a task should be executed locally or offloaded to a server, and to which server it should be offloaded, are problems that must be solved. This paper proposes a new objective function in a multi-agent environment and constructs the corresponding mathematical model. A Markov decision process is then established, with the action space, state space, and reward function defined, and the task offloading policy is optimized through deep reinforcement learning (DRQN). Simulation results show that DRQN outperforms random offloading, DQN, and other algorithms in the combined measure of energy consumption, cost, and delay, demonstrating the effectiveness of the proposed algorithm.
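The MDP formulation described in the abstract (state, action space, and a reward combining energy, cost, and delay) can be sketched in outline as follows. This is a minimal illustrative toy, not the paper's actual model: the server count, cost coefficients, and reward weights are all assumptions introduced here for demonstration.

```python
import random

# Toy offloading MDP sketch: an agent chooses to run a task locally
# (action 0) or offload it to one of N edge servers (actions 1..N).
# All parameters below are illustrative, not taken from the paper.

N_SERVERS = 3
ACTIONS = list(range(N_SERVERS + 1))  # 0 = local, 1..N = edge servers

def reward(delay, energy, cost, w=(0.4, 0.3, 0.3)):
    """Negative weighted sum: lower delay/energy/cost gives higher reward."""
    return -(w[0] * delay + w[1] * energy + w[2] * cost)

def step(action, task_size, cpu_local=1.0, cpu_edge=4.0, bw=2.0, price=0.1):
    """Toy environment transition: returns (delay, energy, cost) for an action."""
    if action == 0:                       # execute locally
        delay = task_size / cpu_local
        energy = 0.5 * task_size          # local CPU energy
        cost = 0.0
    else:                                 # offload: transmit, then compute remotely
        delay = task_size / bw + task_size / cpu_edge
        energy = 0.1 * task_size          # transmission energy only
        cost = price * task_size          # server usage fee
    return delay, energy, cost

def epsilon_greedy(q_values, eps=0.1):
    """Standard epsilon-greedy selection over the offloading actions."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[a])

# One illustrative decision for a task of size 4: score each action by its
# one-step reward (standing in for learned Q-values), then act greedily.
q = {a: reward(*step(a, task_size=4.0)) for a in ACTIONS}
best = epsilon_greedy(q, eps=0.0)
```

In the paper's setting these Q-values would instead come from a recurrent Q-network (DRQN) trained over episodes, which lets the agent condition on a history of observations rather than a single state.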

Key words: edge computing, task offloading, deep reinforcement learning