Computer Engineering and Applications ›› 2008, Vol. 44 ›› Issue (36): 230-233. DOI: 10.3778/j.issn.1002-8331.2008.36.067

• Engineering and Applications •

Multi-agent reinforcement learning for symmetric coordination games

WANG Yun, HAN Wei

  1. College of Information Engineering, Nanjing University of Finance and Economics, Nanjing 210046, China
  • Received: 2008-06-17  Revised: 2008-10-21  Online: 2008-12-21  Published: 2008-12-21
  • Contact: WANG Yun

Abstract: Addressing the problem of coordination games among robots, this paper proposes a belief revision model for agents and a learning algorithm, Position-Exchanging Learning (PEL), based on the similarity of the agents' strategies in coordination games. Through position exchanging, each agent takes its opponent's viewpoint and infers the opponent's actions. The belief revision model combines objectively observed actions with subjectively inferred ones, and coordination is guaranteed by adjusting the belief degree only between the values 0 and 1. PEL is tested in simulations of robots coordinating to avoid collisions, and the results show that it performs better than existing methods.

Key words: Multi-Agent System (MAS), reinforcement learning, coordination games
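
The abstract describes PEL only at a high level and gives no pseudocode, so the following is a minimal illustrative sketch rather than the authors' algorithm. It assumes a two-action symmetric coordination game (two robots choosing which side to pass on), a simple value update, and a binary belief degree that switches between the position-exchange inference and the observed opponent action; all names (PELAgent, coordination_payoff, alpha, epsilon) are hypothetical.

```python
import random

# Hypothetical 2x2 symmetric coordination game: two robots approach each other
# and each picks a side to pass on; they avoid collision iff both adopt the
# same convention (both pass on the right, or both on the left).
ACTIONS = (0, 1)  # 0 = "pass on the right", 1 = "pass on the left"

def coordination_payoff(a_self, a_other):
    """Payoff 1 when the two robots adopt the same convention, else 0."""
    return 1.0 if a_self == a_other else 0.0

class PELAgent:
    """Toy agent mixing observed and position-exchange-inferred opponent actions."""

    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}  # value of each own action
        self.belief = 1.0                   # confidence in own inference: only 0 or 1
        self.alpha = alpha                  # learning rate
        self.epsilon = epsilon              # exploration rate

    def greedy(self):
        return max(ACTIONS, key=lambda a: self.q[a])

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return self.greedy()

    def infer_opponent(self):
        # Position exchange: because the game and the strategies are symmetric,
        # the agent assumes the opponent reasons exactly as it does, so the
        # opponent's action is predicted from the agent's own values.
        return self.greedy()

    def update(self, own_action, observed_action):
        inferred = self.infer_opponent()
        # Belief revision: trust the subjective inference when belief is 1,
        # fall back to the objective observation when belief is 0.
        predicted = inferred if self.belief >= 0.5 else observed_action
        # The belief degree is snapped to 0 or 1 depending on whether the
        # inference agreed with what was actually observed.
        self.belief = 1.0 if inferred == observed_action else 0.0
        # Value update against the predicted opponent action.
        reward = coordination_payoff(own_action, predicted)
        self.q[own_action] += self.alpha * (reward - self.q[own_action])

if __name__ == "__main__":
    robot1, robot2 = PELAgent(), PELAgent()
    for _ in range(500):  # repeated collision-avoidance episodes
        a1, a2 = robot1.choose(), robot2.choose()
        robot1.update(a1, a2)
        robot2.update(a2, a1)
    print("robot1 convention:", robot1.greedy())
    print("robot2 convention:", robot2.greedy())  # should match robot1's
```

In this toy setup the agents share the same structure, so the position-exchange inference usually agrees with the observation, the belief degree stays at 1, and the two robots settle on a common convention; the paper's actual model, belief update rule, and proof are more involved than this sketch.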

Abstract: To address the multi-robot coordination problem, the similarity of agents' strategies in coordination games is exploited to propose a higher-order belief revision model for agents together with the learning method PEL, in which each agent performs position-exchange reasoning from its opponent's perspective and belief revision then combines objectively observed actions with subjective belief inference. It is proved that coordination succeeds when the inference confidence of the belief revision model is adjusted only between the two values 0 and 1. Simulations of multi-robot collision avoidance show that the algorithm achieves better coordination performance than existing methods.

Key words: multi-agent system, reinforcement learning, coordination games