Q-Learning is a widely used method for solving reinforcement learning problems. To speed up learning and to exploit gained experience more efficiently, it is highly beneficial to add generalization to Q-Learning and thus enable the transfer of experience to unseen but similar states. In this paper, we report on improvements to GNG-Q, a combination of Q-Learning and growing neural gas (GNG). It solves reinforcement learning problems with continuous state spaces while simultaneously learning a suitable approximation of the state space, starting with a coarse resolution that is gradually refined based on information acquired during learning. We introduce the Interpolating GNG-Q (IGNG-Q), which uses distance-based interpolation between learned Q-vectors; we adjust the update rule, suggest a new refinement strategy, and propose a new criterion for deciding when a refinement is necessary. Furthermore, we argue that this criterion offers an implicit local stopping condition for changes made to the approximation. Additionally, we employ eligibility traces to speed up learning. The improved method is evaluated in continuous state spaces and the results are compared with several approaches from the literature. Our experiments confirm that the modifications greatly improve the efficiency of the approximation and that IGNG-Q is competitive with existing methods.
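The abstract does not spell out the exact interpolation rule used by IGNG-Q, but the general idea of distance-based interpolation between the Q-vectors stored at GNG units can be sketched as inverse-distance weighting. The following Python snippet is a minimal illustration under that assumption; the names `prototypes`, `q_vectors`, and `interpolated_q` are our own, not identifiers from the paper.

```python
import numpy as np

def interpolated_q(state, prototypes, q_vectors, eps=1e-8):
    """Inverse-distance-weighted interpolation of Q-vectors.

    prototypes: (n, d) array of GNG unit positions in the state space.
    q_vectors:  (n, a) array of the Q-vectors learned at those units.
    Returns an (a,) Q-vector for the query state. This is only an
    illustrative sketch of distance-based interpolation, not the
    paper's exact scheme.
    """
    dists = np.linalg.norm(prototypes - state, axis=1)
    nearest = np.argmin(dists)
    # If the state (almost) coincides with a unit, use that unit's Q-vector.
    if dists[nearest] < eps:
        return q_vectors[nearest]
    weights = 1.0 / dists          # closer units get larger weight
    weights /= weights.sum()       # normalize to a convex combination
    return weights @ q_vectors
```

For example, with two units at 0 and 1 on a one-dimensional state space, a query at 0.5 yields the average of the two Q-vectors, while a query at a unit's position returns that unit's Q-vector exactly.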