Models for autonomously motivated exploration in reinforcement learning

  • Authors:
  • Peter Auer; Shiau Hong Lim; Chris Watkins

  • Affiliations:
  • Chair for Information Technology, Montanuniversität Leoben, Austria; Chair for Information Technology, Montanuniversität Leoben, Austria; Department of Computer Science, Royal Holloway, University of London, UK

  • Venue:
  • DS'11: Proceedings of the 14th International Conference on Discovery Science
  • Year:
  • 2011


Abstract

One of the striking differences between current reinforcement learning algorithms and early human learning is that animals and infants appear to explore their environments with autonomous purpose, in a manner appropriate to their current level of skill. An important intuition for autonomously motivated exploration was proposed by Schmidhuber [1,2]: an agent should be interested in making observations that reduce its uncertainty about future observations. However, there is as yet no theoretical analysis of the usefulness of autonomous exploration with respect to the overall performance of a learning agent. We discuss models for a learning agent's autonomous exploration and present some recent results. In particular, we investigate the exploration time needed to navigate effectively in a Markov Decision Process (MDP) without rewards, and we consider extensions to MDPs with infinite state spaces.