State similarity based approach for improving performance in RL

  • Authors:
  • Sertan Girgin; Faruk Polat; Reda Alhajj

  • Affiliations:
  • Middle East Technical University, Dept. of Computer Engineering and University of Calgary, Dept. of Computer Science; Middle East Technical University, Dept. of Computer Engineering; University of Calgary, Dept. of Computer Science and Global University, Dept. of Computer Science

  • Venue:
  • IJCAI'07: Proceedings of the 20th International Joint Conference on Artificial Intelligence
  • Year:
  • 2007


Abstract

This paper employs state similarity to improve reinforcement learning performance. This is achieved by first identifying states with similar sub-policies. A tree is then constructed and used to locate common action sequences of states, as derived from possible optimal policies. Such sequences are utilized to define a similarity function between states, which makes it possible to reflect updates on the action-value function of a state onto all similar states. As a result, the experience acquired during learning can be applied in a broader context. The effectiveness of the method is demonstrated empirically.
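The core idea in the abstract, reflecting an action-value update onto similar states, can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: the `similarity` function here is a hypothetical stand-in for the tree-based similarity the authors derive from common action sequences, and the states, actions, and constants are invented for the example.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor
STATES = range(5)                # toy state space (hypothetical)
ACTIONS = range(2)               # toy action space (hypothetical)
Q = defaultdict(float)           # action-value table, keyed by (state, action)

def similarity(s, s2):
    """Hypothetical similarity function standing in for the paper's
    tree-derived measure: 1.0 for identical states, a partial value
    for states assumed to share common action sequences, else 0."""
    if s == s2:
        return 1.0
    return 0.5 if abs(s - s2) == 1 else 0.0

def update_with_similarity(s, a, r, s_next):
    """Standard Q-learning update on (s, a), mirrored onto every
    similar state, scaled by the similarity value."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    for s2 in STATES:
        w = similarity(s, s2)
        if w > 0.0:
            Q[(s2, a)] += ALPHA * w * (target - Q[(s2, a)])
```

Starting from an all-zero table, `update_with_similarity(2, 0, 1.0, 3)` moves `Q[(2, 0)]` toward the target and also nudges `Q[(1, 0)]` and `Q[(3, 0)]` by half as much, which is the sense in which experience is applied "in a broader context."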