Computer Engineering and Applications ›› 2008, Vol. 44 ›› Issue (34): 175-178. DOI: 10.3778/j.issn.1002-8331.2008.34.054

• Graphics, Image and Pattern Recognition •

Relevance feedback algorithm for image retrieval based on reinforcement learning

SUN Hui-ping, GONG Sheng-rong, WANG Zhao-hui, LIU Quan

  1. School of Computer Science & Technology, Soochow University, Suzhou, Jiangsu 215006, China
  • Received: 2007-12-17  Revised: 2008-02-29  Online: 2008-12-01  Published: 2008-12-01
  • Contact: SUN Hui-ping

Abstract: Relevance feedback algorithms are an indispensable component of image retrieval and have recently become a research hotspot in the field. This paper proposes a relevance feedback algorithm based on Reinforcement Learning (RL). Following the Q-learning function of RL, a matrix Q is built with one entry Qi (i=1,2,…,n) per image, which records that image's cumulative feedback value in the current retrieval session. At each feedback round, new features are computed with a weighted-feature method, and the current cumulative feedback value of every fed-back image is then computed from the Q-learning function. The larger the value of Q, the more relevant the image is to the example image. Reinforcement learning obtains an optimal path through repeated feedback from the environment, which is consistent with the idea of relevance feedback, where the best answer is obtained by probing the user's retrieval intention. Experiments show that the proposed algorithm is superior.

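To make the bookkeeping described in the abstract concrete, here is a minimal Python sketch: it keeps one entry Qi per database image, updates it with a Q-learning-style rule after each feedback round, and combines it with a weighted-feature distance when ranking results. The class name, the variance-based feature-weighting heuristic, the ±1 reward scheme, and the values of the learning rate alpha and discount gamma are assumptions for illustration only, not details taken from the paper.

```python
import numpy as np

class RLFeedbackRetriever:
    """Illustrative sketch of RL-based relevance feedback (details assumed)."""

    def __init__(self, features, alpha=0.5, gamma=0.8):
        self.features = np.asarray(features, dtype=float)   # (n, d) image feature matrix
        self.n, self.d = self.features.shape
        self.Q = np.zeros(self.n)                # Qi: cumulative feedback value per image
        self.weights = np.ones(self.d) / self.d  # per-dimension feature weights
        self.alpha = alpha                       # learning rate (assumed value)
        self.gamma = gamma                       # discount factor (assumed value)

    def _weighted_distance(self, query):
        # Weighted Euclidean distance from the query to every database image.
        diff = self.features - np.asarray(query, dtype=float)
        return np.sqrt((self.weights * diff ** 2).sum(axis=1))

    def rank(self, query):
        # Rank images: a small weighted distance and a large cumulative
        # feedback value Qi both push an image towards the top of the list.
        score = self.Q - self._weighted_distance(query)
        return np.argsort(-score)

    def feed_back(self, relevant_ids, irrelevant_ids):
        # Weighted-feature step (assumed heuristic): dimensions on which the
        # relevant images agree (low variance) receive larger weights.
        if len(relevant_ids) > 1:
            var = self.features[list(relevant_ids)].var(axis=0)
            w = 1.0 / (var + 1e-6)
            self.weights = w / w.sum()

        # Q-learning-style update of each fed-back image's cumulative value,
        # with reward +1 for relevant and -1 for irrelevant images (assumed).
        for i, reward in [(i, 1.0) for i in relevant_ids] + \
                         [(i, -1.0) for i in irrelevant_ids]:
            self.Q[i] += self.alpha * (reward + self.gamma * self.Q[i] - self.Q[i])


if __name__ == "__main__":
    # Toy usage: 5 images with 3-dimensional features and one feedback round.
    rng = np.random.default_rng(0)
    retriever = RLFeedbackRetriever(rng.random((5, 3)))
    query = rng.random(3)
    print(retriever.rank(query))                              # initial ranking
    retriever.feed_back(relevant_ids=[0, 2], irrelevant_ids=[4])
    print(retriever.rank(query))                              # ranking after feedback
```

A retrieval session would alternate calls to rank and feed_back until the user is satisfied; images the user repeatedly marks as relevant accumulate larger Qi values and rise in later rankings, which mirrors how the feedback value in the abstract accumulates across rounds.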