Adaptive function approximation in reinforcement learning with an interpolating growing neural gas

  • Authors:
  • Michael Baumann; Hans Kleine Büning

  • Affiliations:
  • Department of Computer Science, University of Paderborn, Paderborn, Germany

  • Venue:
  • International Journal of Hybrid Intelligent Systems
  • Year:
  • 2014

Abstract

Q-Learning is a widely used method for solving reinforcement learning problems. To speed up learning and to exploit gained experience more efficiently, it is highly beneficial to add generalization to Q-Learning, thus enabling the transfer of experience to unseen but similar states. In this paper, we report on improvements to GNG-Q, a combination of Q-Learning and the growing neural gas (GNG). It solves reinforcement learning problems with continuous state spaces and simultaneously learns a suitable approximation of the state space, starting with a coarse resolution that is gradually refined based on information gained during learning. We introduce the Interpolating GNG-Q (IGNG-Q), which uses distance-based interpolation between learned Q-vectors; in addition, we adjust the update rule, suggest a new refinement strategy, and propose a new criterion to decide when a refinement is necessary. Furthermore, we argue that this criterion offers an implicit local stopping condition for changes made to the approximation. Additionally, we employ eligibility traces to speed up learning. The improved method is evaluated in continuous state spaces and the results are compared with several approaches from the literature. Our experiments confirm that the modifications substantially improve the efficiency of the approximation and that IGNG-Q is competitive with existing methods.
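
To illustrate the distance-based interpolation between learned Q-vectors mentioned in the abstract, the sketch below shows one plausible reading: the Q-vector for a continuous state is an inverse-distance-weighted combination of the Q-vectors stored at nearby GNG units. The choice of k nearest units, the inverse-distance weights, and the names used here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def interpolated_q(state, unit_positions, unit_q_vectors, k=3, eps=1e-8):
    """Sketch: Q-vector for `state` by inverse-distance weighting of the
    Q-vectors stored at the k nearest GNG units (assumed scheme)."""
    dists = np.linalg.norm(unit_positions - state, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)   # closer units receive larger weight
    weights /= weights.sum()                 # normalize weights to sum to one
    return weights @ unit_q_vectors[nearest] # weighted sum of the units' Q-vectors

# Usage example: random GNG units in a 2-D continuous state space, 4 actions.
rng = np.random.default_rng(0)
units = rng.uniform(0.0, 1.0, size=(10, 2))  # unit positions in state space
q_vectors = rng.normal(size=(10, 4))         # one Q-vector per unit
state = np.array([0.4, 0.6])
q = interpolated_q(state, units, q_vectors)
greedy_action = int(np.argmax(q))            # action selection from interpolated Q-vector
```

Such an interpolation smooths the value estimates between units, so that experience gathered at one unit generalizes to nearby unseen states, which is the motivation stated in the abstract.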