Adaptive function approximation in reinforcement learning with an interpolating growing neural gas
International Journal of Hybrid Intelligent Systems
A method for function approximation in reinforcement-learning settings is proposed. The action-value function of Q-learning is approximated by a radial basis function (RBF) neural network and trained by gradient descent. Radial basis units that cannot fit the local action-value function accurately enough are decomposed into new units with smaller widths. The local temporal-difference error is modelled by a two-class learning vector quantization (LVQ) algorithm, which approximates the distributions of the positive and the negative errors and provides the centres of the new units. The method is particularly suited to smooth value functions with large local variation in parts of the state space, where non-uniform placement of basis functions is required. Compared with four related methods, it requires the fewest basis functions while achieving comparable accuracy.
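The core mechanism described in the abstract, an RBF approximation of the action-value function updated by gradient descent on the temporal-difference error, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, the Gaussian basis form, and all hyperparameters (centres, widths, `gamma`, `alpha`) are assumptions for the example; the paper's unit decomposition and LVQ modelling of the TD error are not shown.

```python
import numpy as np

class RBFQApproximator:
    """Sketch of Q(s, a) as a linear combination of Gaussian RBF units,
    trained by gradient-descent temporal-difference updates.
    Illustrative only; not the paper's exact method."""

    def __init__(self, centers, widths, n_actions):
        self.centers = np.asarray(centers, dtype=float)  # (n_units, state_dim)
        self.widths = np.asarray(widths, dtype=float)    # (n_units,)
        self.w = np.zeros((len(self.centers), n_actions))  # output weights

    def features(self, s):
        # Gaussian radial basis activations for state s
        d2 = np.sum((self.centers - np.asarray(s, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2))

    def q_values(self, s):
        # Q(s, .) = phi(s) @ W
        return self.features(s) @ self.w

    def td_update(self, s, a, r, s_next, gamma=0.9, alpha=0.1):
        # One-step Q-learning target; gradient step on the squared TD error
        phi = self.features(s)
        target = r + gamma * np.max(self.q_values(s_next))
        delta = target - phi @ self.w[:, a]
        self.w[:, a] += alpha * delta * phi
        # The paper feeds this local TD error to a two-class LVQ,
        # which locates where new, narrower units should be placed.
        return delta
```

In the full method, units whose local TD error stays large would be decomposed into narrower units centred at the LVQ prototypes of the positive- and negative-error distributions; the sketch above covers only the approximation and learning step.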