Learning While Exploring: Bridging the Gaps in the Eligibility Traces

  • Authors:
  • Fredrik A. Dahl; Ole Martin Halck

  • Venue:
  • EMCL '01: Proceedings of the 12th European Conference on Machine Learning
  • Year:
  • 2001

Abstract

The reinforcement learning algorithm TD(λ) applied to Markov decision processes is known to require added exploration in many cases. With the usual implementations of exploration in TD-learning, the feedback signals are either distorted or discarded, so exploration hurts the algorithm's learning. The present article gives a modification of the TD-learning algorithm that allows exploration at no cost to the accuracy or speed of learning. The idea is that when the learning agent performs an action it perceives as inferior, it is compensated for its loss; that is, it receives an additional reward equal to its estimated cost of making the exploring move. This modification is compatible with existing exploration strategies, and it is seen to work well on a simple grid-world problem, even when the agent always explores completely at random.
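
The abstract pins down neither the representation nor the bookkeeping details, so the following is a minimal sketch of one plausible reading rather than the authors' exact algorithm: tabular Sarsa(λ) with accumulating eligibility traces on a small deterministic grid world, where the behavior policy is completely random and each exploratory step pays the agent its estimated cost of exploring, max_a Q(s, a) − Q(s, a_t), through the traces of earlier state-action pairs. The grid layout, the reward of −1 per step, and the hyper-parameters ALPHA, GAMMA and LAM are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch: tabular Sarsa(lambda) with accumulating eligibility traces
# on a small deterministic grid world, plus the compensation-for-exploring
# idea from the abstract.  All names and numbers here are illustrative.

N = 5                                         # N x N grid, goal in the corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, LAM = 0.1, 0.95, 0.8            # assumed hyper-parameters

Q = np.zeros((N, N, len(ACTIONS)))            # action-value estimates

def step(state, a):
    """Deterministic move: reward -1 per step, episode ends at the goal."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    return (nr, nc), -1.0, (nr, nc) == (N - 1, N - 1)

rng = np.random.default_rng(0)
for episode in range(500):
    traces = np.zeros_like(Q)
    s, done = (0, 0), False
    a = int(rng.integers(len(ACTIONS)))       # behave completely at random
    while not done:
        traces *= GAMMA * LAM                 # decay, but never cut, traces
        # Compensation: pay the agent its estimated cost of exploring, so
        # the feedback flowing back through earlier traces is undistorted.
        cost = Q[s].max() - Q[s][a]
        Q += ALPHA * cost * traces            # earlier pairs only ...
        traces[s][a] += 1.0                   # ... the current pair joins now
        s2, reward, done = step(s, a)
        a2 = int(rng.integers(len(ACTIONS)))  # next action, again random
        target = reward if done else reward + GAMMA * Q[s2][a2]
        Q += ALPHA * (target - Q[s][a]) * traces
        s, a = s2, a2

print(f"estimated start-state value: {Q[0, 0].max():.2f}")
```

In this reading, the compensation is applied to the decayed traces before the current pair joins them, so the exploring pair's own update target stays unbiased; the traces themselves are decayed but never cut at exploratory moves, unlike Watkins-style Q(λ). With λ near 1, the compensated updates approximate learning the greedy policy's values while behaving randomly, which is the effect the abstract describes.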