Guiding exploration by pre-existing knowledge without modifying reward

  • Authors: Kary Främling
  • Affiliation: Helsinki University of Technology, P.O. Box 5500, FIN-02015 HUT, Finland
  • Venue: Neural Networks
  • Year: 2007

Abstract

Reinforcement learning is based on exploring the environment and receiving rewards that indicate which of the agent's actions are good and which are bad. In many applications, even receiving the first reward may require long exploration, during which the agent has no information about its progress. This paper presents an approach that uses pre-existing knowledge about the task to guide exploration through the state space. Concepts of short- and long-term memory combine this guidance with reinforcement learning methods for value function estimation, making learning faster while still allowing the agent to converge towards a good policy.
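
The core idea (guiding exploration with prior knowledge while leaving the reward signal untouched) can be illustrated with a simple sketch. The code below is not the paper's exact short-/long-term memory mechanism; it is a minimal stand-in in which a fixed heuristic prior biases action selection with an assumed weight `BETA`, while a standard Q-learning update estimates values from the unmodified reward on a small corridor task. All names and parameters here are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch: exploration biased by pre-existing knowledge,
# value function learned from the raw, unmodified reward.
import random

N_STATES = 10          # states 0..9; reward only at state 9
ACTIONS = [-1, +1]     # step left / step right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
BETA = 2.0             # strength of the heuristic bias (assumed parameter)

def heuristic(state, action):
    # Pre-existing task knowledge: a crude preference for moving right.
    # It only guides exploration; it never changes the reward.
    return 1.0 if action == +1 else 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def select_action(state):
    # Combine the learned values (long-term knowledge) with the
    # heuristic prior (guidance); epsilon keeps residual exploration.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    scores = {a: Q[(state, a)] + BETA * heuristic(state, a) for a in ACTIONS}
    return max(scores, key=scores.get)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward left unmodified
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        a = select_action(s)
        s2, r, done = step(s, a)
        # Standard Q-learning update on the raw reward signal.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print("Learned values at state 0:", {a: round(Q[(0, a)], 3) for a in ACTIONS})
```

Because the heuristic enters only the action-selection step, the learned value function remains an estimate of the true (unshaped) return; in a fuller treatment one might anneal `BETA` so that the learned policy dominates once the value estimates become informative.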