Tree based discretization for continuous state space reinforcement learning

  • Authors:
  • William T. B. Uther; Manuela M. Veloso

  • Venue:
  • AAAI '98/IAAI '98: Proceedings of the Fifteenth National Conference on Artificial Intelligence / Tenth Conference on Innovative Applications of Artificial Intelligence
  • Year:
  • 1998

Abstract

Reinforcement learning is an effective technique for learning action policies in discrete stochastic environments, but its efficiency can decay exponentially with the size of the state space. In many situations, significant portions of a large state space may be irrelevant to a specific goal and can be aggregated into a few relevant states. The U Tree algorithm generates a tree-based state discretization that efficiently finds the relevant state chunks of large propositional domains. In this paper, we extend the U Tree algorithm to challenging domains with a continuous state space for which there is no initial discretization. The resulting Continuous U Tree algorithm transfers traditional regression tree techniques to reinforcement learning. We have performed experiments in a variety of domains showing that Continuous U Tree effectively handles large continuous state spaces. We report results in two domains: one gives a clear visualization of the algorithm, and the other empirically demonstrates an effective state discretization in a simple multi-agent environment.
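
The abstract gives only a high-level description. As a purely illustrative aid, the following is a minimal Python sketch of the general idea behind tree-based state discretization: a decision tree whose leaves act as abstract states over a continuous state space, where a leaf is split on the dimension and threshold that best separate the values of the experiences stored in it. All names here are hypothetical, and the variance-reduction criterion (borrowed from regression trees) is a stand-in for the paper's actual statistical splitting tests; this sketches the flavor of the technique, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Leaf:
    # Each leaf is one abstract state; it stores (state_vector, value) pairs.
    experiences: list = field(default_factory=list)

@dataclass
class Split:
    dim: int           # state dimension tested at this internal node
    threshold: float   # descend left if state[dim] <= threshold
    left: object = None
    right: object = None

def leaf_for(node, state):
    """Descend the tree to the leaf (abstract state) containing `state`."""
    while isinstance(node, Split):
        node = node.left if state[node.dim] <= node.threshold else node.right
    return node

def sse(values):
    """Sum of squared deviations from the mean (regression-tree impurity)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(leaf):
    """Return (gain, dim, threshold) maximizing value-variance reduction."""
    if not leaf.experiences:
        return None
    base = sse([v for _, v in leaf.experiences])
    best = None
    for dim in range(len(leaf.experiences[0][0])):
        for s, _ in leaf.experiences:  # candidate thresholds: observed values
            thr = s[dim]
            left = [v for x, v in leaf.experiences if x[dim] <= thr]
            right = [v for x, v in leaf.experiences if x[dim] > thr]
            if not left or not right:
                continue
            gain = base - (sse(left) + sse(right))
            if best is None or gain > best[0]:
                best = (gain, dim, thr)
    return best

def maybe_split(leaf, min_gain=1.0):
    """Replace a leaf with an internal node if a good enough split exists."""
    found = best_split(leaf)
    if found is None or found[0] < min_gain:
        return leaf
    _, dim, thr = found
    node = Split(dim, thr, Leaf(), Leaf())
    for s, v in leaf.experiences:  # redistribute experiences to children
        leaf_for(node, s).experiences.append((s, v))
    return node

# Usage: pool experiences in one leaf, then try to refine the discretization.
root = Leaf()
samples = [((0.1, 0.9), 0.0), ((0.2, 0.8), 0.1),
           ((0.9, 0.2), 5.0), ((0.8, 0.1), 4.9)]
for s, v in samples:
    leaf_for(root, s).experiences.append((s, v))
root = maybe_split(root)  # splits on dim 0: low-x and high-x states differ
```

In a full learner, each leaf would be treated as one discrete state: transitions and rewards are aggregated per leaf, a value function is solved over the leaves, and the tree is refined only where the data indicate that a leaf lumps together states with meaningfully different values.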