Computer Engineering and Applications ›› 2011, Vol. 47 ›› Issue (23): 212-216.

• Engineering and Applications •

RoboCup-2D passing strategy based on joint reinforcement learning

CHANG Xiaojun   

  1. Faculty of Automation and Information Engineering,Xi’an University of Technology,Xi’an 710048,China
  • Online:2011-08-11 Published:2011-08-11


Abstract: A joint Q-learning algorithm for a Multi-Agent System (MAS) is proposed on the basis of the traditional Q-learning algorithm. The agents learn under a single shared evaluation function, and the learning process takes into account the learning results of all agents participating in the collaboration. In the RoboCup-2D soccer simulation game, a pitch state decomposition method is introduced to reduce the number of state components. The optimal state obtained by joint learning is adopted as the optimal action group for multi-agent collaboration, which effectively solves the passing-strategy and cooperation problems among the agents in the simulation. Simulation and experimental results demonstrate the validity and reliability of the proposed algorithm.

Key words: multi-agent system, joint Q-learning algorithm, RoboCup-2D, pitch state decomposition method
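The joint learning scheme described in the abstract can be sketched as a Q-learning update over joint states and joint actions with a single shared reward. The sketch below is a minimal illustration, not the paper's implementation: the state labels, the pass/run joint-action pairs, and the hyperparameter values are all hypothetical, and the pitch state decomposition is only hinted at by the coarse zone labels.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def joint_q_update(Q, state, joint_action, reward, next_state, next_joint_actions):
    """One Q-learning step under a single shared evaluation function:
    all cooperating agents update the same Q-table with the same reward."""
    best_next = max((Q[(next_state, a)] for a in next_joint_actions), default=0.0)
    key = (state, joint_action)
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])

def choose_joint_action(Q, state, joint_actions):
    """Epsilon-greedy selection of the agents' joint action group."""
    if random.random() < EPSILON:
        return random.choice(joint_actions)
    return max(joint_actions, key=lambda a: Q[(state, a)])

Q = defaultdict(float)
# A joint action pairs the passer's choice with the receiver's move
# (hypothetical labels standing in for the RoboCup-2D action set).
actions = [("pass_left", "run_left"), ("pass_right", "run_right")]
s, s_next = "zone3", "zone4"  # coarse pitch zones after state decomposition
a = choose_joint_action(Q, s, actions)
joint_q_update(Q, s, a, reward=1.0, next_state=s_next, next_joint_actions=actions)
```

Because both agents read and write the same Q-table under one reward signal, the learned greedy joint action serves directly as the cooperative action group, which is the role the abstract assigns to the optimal state obtained by joint learning.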
