Propagation of Q-values in Tabular TD(lambda)

  • Authors:
  • Philippe Preux

  • Affiliations:
  • -

  • Venue:
  • ECML '02 Proceedings of the 13th European Conference on Machine Learning
  • Year:
  • 2002

Abstract

In this paper, we propose a new idea for the tabular TD(λ) algorithm. In TD learning, rewards are propagated along the sequence of state/action pairs that have been visited recently. Complementing this, we propose to also propagate rewards to state/action pairs that neighbor this sequence, even though they have not been visited. This greatly decreases the number of iterations TD(λ) needs in order to generalize, since a state/action pair no longer has to be visited for its Q-value to be updated. This propagation process brings tabular TD(λ) closer to neural-network-based TD(λ) with regard to its ability to generalize, while leaving the other properties of tabular TD(λ) unchanged.
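
To illustrate the general idea, the sketch below shows a tabular Q(λ)-style learner with eligibility traces, extended with an extra step that spreads a fraction of each TD update onto unvisited neighboring state/action pairs. The `neighbors` function, the propagation weight `kappa`, and the simplified trace handling are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def neighbors(s, a, n_states, n_actions):
    """Hypothetical neighborhood: adjacent states paired with the same action."""
    for ds in (-1, 1):
        ns = s + ds
        if 0 <= ns < n_states:
            yield ns, a

def q_lambda_with_propagation(env_step, env_reset, n_states, n_actions,
                              episodes=100, alpha=0.1, gamma=0.95,
                              lam=0.9, epsilon=0.1, kappa=0.3):
    """Simplified tabular Q(lambda) plus neighbor propagation (sketch only)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        E = np.zeros_like(Q)                       # eligibility traces
        s = env_reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s2, r, done = env_step(s, a)
            a2 = int(np.argmax(Q[s2]))
            delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
            E[s, a] += 1.0                         # accumulating trace

            # standard tabular TD(lambda) update over all traced pairs
            Q += alpha * delta * E

            # extra step: push a fraction of the same update onto unvisited
            # neighbors of every pair carrying eligibility
            for (si, ai), e in np.ndenumerate(E):
                if e > 0:
                    for nsi, nai in neighbors(si, ai, n_states, n_actions):
                        if E[nsi, nai] == 0:
                            Q[nsi, nai] += kappa * alpha * delta * e

            E *= gamma * lam
            s = s2
    return Q

if __name__ == "__main__":
    # Tiny deterministic chain: action 1 moves right, action 0 moves left,
    # reward 1 on reaching the rightmost state.
    N = 10
    def reset():
        return 0
    def step(s, a):
        s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
        done = s2 == N - 1
        return s2, (1.0 if done else 0.0), done
    print(np.round(q_lambda_with_propagation(step, reset, N, 2), 2))
```

In this sketch the propagation step is what lets Q-values of never-visited pairs change, which is the source of the faster generalization the abstract describes; the particular neighborhood structure and weighting would depend on the problem and on the paper's own definitions.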