Effectiveness of considering state similarity for reinforcement learning

  • Authors:
  • Sertan Girgin (Department of Computer Engineering, Middle East Technical University, Ankara, Turkey)
  • Faruk Polat (Department of Computer Engineering, Middle East Technical University, Ankara, Turkey)
  • Reda Alhajj (Department of Computer Science, University of Calgary, Calgary, AB, Canada)

  • Venue:
  • IDEAL'06: Proceedings of the 7th International Conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2006

Abstract

This paper presents a novel approach that locates states with similar sub-policies and incorporates them into the reinforcement learning framework to improve learning performance. This is achieved by identifying common action sequences of states, which are derived from possible optimal policies and stored in a tree structure. Based on the number of such shared sequences, we define a similarity function between two states, which is used to propagate updates to one state's action-value function to all similar states. In this way, experience acquired during learning can be applied in a broader context. The effectiveness of the method is demonstrated empirically.
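The core mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation: the similarity function, the threshold, and the scaled-propagation rule below are assumptions chosen to mirror the abstract's idea of counting common action sequences and reflecting Q-value updates onto similar states.

```python
from collections import defaultdict

def similarity(seqs_s, seqs_t):
    """Assumed similarity: fraction of action sequences common to two states."""
    if not seqs_s or not seqs_t:
        return 0.0
    return len(seqs_s & seqs_t) / max(len(seqs_s), len(seqs_t))

def q_update_with_similarity(Q, seqs, s, a, target, alpha=0.1, threshold=0.5):
    """TD-style update on (s, a), then reflect the update onto similar states,
    scaled by their similarity (hypothetical propagation rule)."""
    delta = target - Q[(s, a)]
    Q[(s, a)] += alpha * delta
    for t in seqs:
        if t == s:
            continue
        sim = similarity(seqs[s], seqs[t])
        if sim >= threshold:
            Q[(t, a)] += alpha * sim * delta  # reflected, similarity-scaled update

# Toy usage: s1 and s2 share most action sequences; s3 shares none.
Q = defaultdict(float)
seqs = {
    "s1": {("up", "right"), ("right", "up")},
    "s2": {("up", "right"), ("right", "up"), ("down",)},
    "s3": {("left",)},
}
q_update_with_similarity(Q, seqs, "s1", "up", target=1.0)
```

After this single update, `s2` (similarity 2/3) receives a scaled share of the experience gathered at `s1`, while the dissimilar `s3` is left untouched.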