计算机工程与应用 (Computer Engineering and Applications) ›› 2010, Vol. 46 ›› Issue (8): 52-55. DOI: 10.3778/j.issn.1002-8331.2010.08.015

• Research and Discussion •

Multi-step temporal difference learning algorithm based on recursive least-squares method

CHEN Xue-song, YANG Yi-min

  1. Faculty of Applied Mathematics, Guangdong University of Technology, Guangzhou 510006, China
    2. Faculty of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2009-09-22  Revised: 2009-11-18  Online: 2010-03-11  Published: 2010-03-11
  • Corresponding author: CHEN Xue-song

Multi-step temporal difference learning algorithm based on recursive least-squares method

CHEN Xue-song, YANG Yi-min

  1. Faculty of Applied Mathematics, Guangdong University of Technology, Guangzhou 510006, China
    2. Faculty of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2009-09-22  Revised: 2009-11-18  Online: 2010-03-11  Published: 2010-03-11
  • Contact: CHEN Xue-song

Abstract: Reinforcement learning is an important machine learning method. To speed up the convergence of the learning process and to reduce the error of value-function estimation, a multi-step temporal difference learning algorithm based on the recursive least-squares method, RLS-TD(λ), is proposed. It is proved that, under certain conditions, the weights of the algorithm converge with probability 1 to a unique solution, and a relation that the error of the value-function estimate must satisfy is derived and proved. Maze experiments show that, compared with the RLS-TD(0) algorithm, the proposed algorithm accelerates the convergence of the learning process, and compared with the conventional TD(λ) algorithm, it reduces the value-function estimation error and thus improves precision.

Abstract: Reinforcement learning is one of the most important machine learning methods. To address the slow convergence and the value-function estimation error of reinforcement learning systems, a multi-step Temporal Difference (TD(λ)) learning algorithm using the Recursive Least-Squares (RLS) method, RLS-TD(λ), is proposed. The proposed algorithm is based on RLS-TD(0); its convergence is proved, and a formula for its estimation error is obtained. The experiment on the maze problem demonstrates that the algorithm can speed up the convergence of the learning process compared with RLS-TD(0), and improve the learning precision compared with TD(λ).
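To make the update concrete, the following is a minimal sketch of the generic RLS-TD(λ) recursions for linear value-function approximation V(s) ≈ φ(s)ᵀθ: an eligibility trace, the temporal-difference feature vector, and a Sherman-Morrison style recursive update of the inverse correlation matrix in place of a gradient step. The class name, parameter names, and default values (gamma, lam, delta) are illustrative assumptions and are not taken from the paper, whose exact formulation, forgetting factor, and convergence conditions are given in the full text.

import numpy as np

class RLSTDLambda:
    """Sketch of an RLS-TD(lambda) estimator for a linear value function
    V(s) ~= phi(s)^T theta.  The inverse correlation matrix P is updated
    recursively (Sherman-Morrison) instead of taking gradient steps."""

    def __init__(self, n_features, gamma=0.95, lam=0.8, delta=100.0):
        self.gamma = gamma                    # discount factor (assumed value)
        self.lam = lam                        # trace-decay parameter lambda (assumed value)
        self.theta = np.zeros(n_features)     # weight vector
        self.z = np.zeros(n_features)         # eligibility trace
        self.P = delta * np.eye(n_features)   # inverse correlation matrix, P_0 = delta * I

    def update(self, phi, phi_next, reward):
        """Process one observed transition (phi -> phi_next, reward)."""
        # accumulate the eligibility trace
        self.z = self.gamma * self.lam * self.z + phi
        # TD feature difference and TD error under the current weights
        d = phi - self.gamma * phi_next
        td_error = reward - d @ self.theta
        # recursive least-squares gain and rank-one update of P
        Pz = self.P @ self.z
        k = Pz / (1.0 + d @ Pz)
        self.theta = self.theta + k * td_error
        self.P = self.P - np.outer(k, d @ self.P)
        return td_error

    def value(self, phi):
        """Current value estimate for a state with feature vector phi."""
        return float(phi @ self.theta)

A typical use is to call update(phi_t, phi_next, r_t) once per observed transition while following a fixed policy and to read off value estimates via value(phi); the maze experiment in the paper plays this role empirically when comparing convergence against RLS-TD(0) and TD(λ).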

CLC Number: