Learning skills in reinforcement learning using relative novelty

  • Authors: Özgür Şimşek and Andrew G. Barto
  • Affiliations: Department of Computer Science, University of Massachusetts, Amherst, MA (both authors)
  • Venue: SARA'05: Proceedings of the 6th International Conference on Abstraction, Reformulation and Approximation
  • Year: 2005


Abstract

We present a method for automatically creating a set of useful temporally-extended actions, or skills, in reinforcement learning. Our method identifies states that allow the agent to transition to a different region of the state space—for example, a doorway between two rooms—and generates temporally-extended actions that efficiently take the agent to these states. In identifying such states we use the concept of relative novelty, a measure of how much short-term novelty a state introduces to the agent. The resulting algorithm is simple, has low computational complexity, and is shown to improve performance in a number of problems.
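The abstract does not spell out how relative novelty is computed. Below is a minimal Python sketch of one plausible reading, under stated assumptions: the novelty of a visit is taken to be the inverse square root of the state's visitation count so far, and a state's relative novelty is the total novelty in a short window after a visit divided by the total novelty in a window before it. The window size, threshold, and the two-room example are illustrative choices, not the paper's tuned values.

```python
import math
from collections import defaultdict

def novelty_trace(trajectory):
    """Novelty of each visit, assumed here to be 1/sqrt(visit count so far):
    the less often a state has been seen, the more novel the visit."""
    counts = defaultdict(int)
    trace = []
    for s in trajectory:
        counts[s] += 1
        trace.append(1.0 / math.sqrt(counts[s]))
    return trace

def relative_novelty(trajectory, window=7):
    """Score each visited state by the novelty of what follows it relative
    to the novelty of what preceded it, averaged over all of its visits.
    States that lead into a fresh region (e.g. a doorway) score high."""
    trace = novelty_trace(trajectory)
    totals, visits = defaultdict(float), defaultdict(int)
    for t in range(window, len(trajectory) - window):
        before = sum(trace[t - window:t])
        after = sum(trace[t + 1:t + 1 + window])
        s = trajectory[t]
        totals[s] += after / before
        visits[s] += 1
    return {s: totals[s] / visits[s] for s in totals}

# Hypothetical walk: ten steps in room 1, through a doorway, ten in room 2.
walk = ["room1"] * 10 + ["door"] + ["room2"] * 10
for state, score in relative_novelty(walk).items():
    print(state, round(score, 2))
# "door" scores highest: the states after it are newer than those before it.
```

In this sketch, states whose score exceeds a threshold would be treated as candidate subgoals, with a temporally-extended action (an option) then constructed to reach each of them efficiently.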