State Aggregation by Growing Neural Gas for Reinforcement Learning in Continuous State Spaces

  • Authors:
  • Michael Baumann; Hans Kleine Büning

  • Venue:
  • ICMLA '11 Proceedings of the 2011 10th International Conference on Machine Learning and Applications and Workshops - Volume 01
  • Year:
  • 2011

Abstract

One of the conditions for the convergence of Q-Learning is that each state-action pair is visited infinitely (or at least sufficiently) often. This requirement raises problems for large or continuous state spaces. In continuous state spaces in particular, a discretization fine enough to capture all relevant information usually results in an extremely large state space. To speed up and improve learning, it is highly beneficial to add generalization to Q-Learning and thus be able to exploit previously gained experience. To achieve this, we compute a state space abstraction with a combination of growing neural gas and Q-Learning. This abstraction respects similarity in the state and action space and is constructed from information obtained through interaction with the environment during learning. We evaluate the proposed algorithm on a continuous-state reinforcement learning problem and show that the approximate state space and the generalization speed up learning.
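
To make the abstract's idea concrete, below is a minimal sketch, not the authors' implementation, of how a growing neural gas (GNG) can aggregate a continuous state space for tabular Q-Learning: the GNG units quantize the observations, and Q-values are kept per unit index. All hyperparameter names and values (eps_b, eps_n, max_age, lam, alpha, decay) and the simplification of never removing units are illustrative assumptions; the paper's actual algorithm may differ.

```python
import numpy as np

class GrowingNeuralGas:
    """Incremental vector quantizer (Fritzke-style GNG), simplified:
    isolated units are never removed, so unit indices stay stable."""

    def __init__(self, dim, eps_b=0.05, eps_n=0.006, max_age=50,
                 lam=100, alpha=0.5, decay=0.995):
        self.units = [np.random.rand(dim), np.random.rand(dim)]
        self.error = [0.0, 0.0]
        self.edges = {}  # frozenset({i, j}) -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.max_age, self.lam = max_age, lam
        self.alpha, self.decay = alpha, decay
        self.steps = 0

    def nearest(self, x):
        return min(range(len(self.units)),
                   key=lambda i: np.linalg.norm(x - self.units[i]))

    def adapt(self, x):
        """One GNG step for input x; returns the winning unit's index."""
        self.steps += 1
        order = sorted(range(len(self.units)),
                       key=lambda i: np.linalg.norm(x - self.units[i]))
        s1, s2 = order[0], order[1]
        self.error[s1] += np.linalg.norm(x - self.units[s1]) ** 2
        self.units[s1] += self.eps_b * (x - self.units[s1])  # move winner
        for e in list(self.edges):            # age edges, drag neighbours
            if s1 in e:
                self.edges[e] += 1
                j = next(iter(e - {s1}))
                self.units[j] += self.eps_n * (x - self.units[j])
        self.edges[frozenset({s1, s2})] = 0   # (re)connect the winner pair
        self.edges = {e: a for e, a in self.edges.items()
                      if a <= self.max_age}   # prune stale edges
        if self.steps % self.lam == 0:        # grow: split the worst region
            q = max(range(len(self.units)), key=lambda i: self.error[i])
            nbrs = [next(iter(e - {q})) for e in self.edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: self.error[i])
                self.units.append(0.5 * (self.units[q] + self.units[f]))
                self.error[q] *= self.alpha
                self.error[f] *= self.alpha
                self.error.append(self.error[q])
                r = len(self.units) - 1
                del self.edges[frozenset({q, f})]
                self.edges[frozenset({q, r})] = 0
                self.edges[frozenset({f, r})] = 0
        self.error = [e * self.decay for e in self.error]
        return s1


class GNGQAgent:
    """Tabular Q-Learning whose discrete states are the GNG unit indices."""

    def __init__(self, dim, n_actions, gamma=0.95, lr=0.1, epsilon=0.1):
        self.gng = GrowingNeuralGas(dim)
        self.q = {}  # unit index -> vector of action values
        self.n_actions, self.gamma = n_actions, gamma
        self.lr, self.epsilon = lr, epsilon

    def _qrow(self, s):
        return self.q.setdefault(s, np.zeros(self.n_actions))

    def act(self, obs):
        """Map the continuous observation to a unit, pick epsilon-greedily."""
        s = self.gng.adapt(np.asarray(obs, dtype=float))
        if np.random.rand() < self.epsilon:
            return s, np.random.randint(self.n_actions)
        return s, int(np.argmax(self._qrow(s)))

    def update(self, s, a, reward, next_obs, done):
        """Standard Q-Learning backup over the aggregated states."""
        s2 = self.gng.nearest(np.asarray(next_obs, dtype=float))
        target = reward + (0.0 if done else self.gamma * self._qrow(s2).max())
        self._qrow(s)[a] += self.lr * (target - self._qrow(s)[a])
```

Because units in this sketch are only ever added, the Q-table keyed by unit index stays consistent as the abstraction grows; a fuller GNG that also deletes isolated units would need to migrate or discard their Q-values.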