Reinforcement learning algorithms with function approximation have attracted much research interest, since most real-world problems have large or continuous state spaces. To improve the generalization ability of function approximation, kernel-based reinforcement learning has become one of the most promising approaches in recent years. A main difficulty in kernel methods, however, is the computational and storage cost of the kernel matrix, whose dimension equals the number of data samples. In this paper, a novel sparse kernel-based least-squares temporal-difference (TD) algorithm for reinforcement learning is presented, in which a kernel sparsification procedure based on approximate linear dependence (ALD) analysis is used to reduce the dimension of the kernel matrix efficiently. The solution of the kernel-based LS-TD(λ) learning algorithm is derived by least-squares regression in the kernel-induced high-dimensional feature space, and its sparsity is guaranteed by the ALD-based sparsification procedure. Compared with previous linear TD(λ) methods, the proposed method not only has good nonlinear approximation ability but also yields sparse solutions with low computational cost. Experimental results on learning prediction in a Markov chain illustrate the effectiveness of the proposed method.
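The ALD-based sparsification idea can be sketched as follows: a new sample is added to the dictionary only if its feature vector cannot be approximated, up to a threshold ν, by a linear combination of the feature vectors of the samples already in the dictionary. This is a minimal illustrative implementation, not the paper's code; the Gaussian kernel, the threshold value, and the small regularization term are assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel, assumed here for illustration
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def ald_sparsify(samples, nu=0.1, sigma=1.0):
    """Build a sparse dictionary via approximate linear dependence (ALD).

    A sample x is admitted only if its ALD error
        delta = k(x, x) - k_x^T K^{-1} k_x
    exceeds the threshold nu, i.e. phi(x) is not (approximately) a linear
    combination of the dictionary's feature vectors.
    """
    dictionary = [samples[0]]
    for x in samples[1:]:
        # Kernel matrix over the current dictionary
        K = np.array([[gaussian_kernel(d1, d2, sigma) for d2 in dictionary]
                      for d1 in dictionary])
        # Kernel values between the candidate sample and the dictionary
        k_x = np.array([gaussian_kernel(d, x, sigma) for d in dictionary])
        # Least-squares coefficients of phi(x) on the dictionary features
        # (tiny ridge term added for numerical stability)
        a = np.linalg.solve(K + 1e-9 * np.eye(len(dictionary)), k_x)
        delta = gaussian_kernel(x, x, sigma) - k_x @ a
        if delta > nu:
            dictionary.append(x)
    return dictionary
```

In the kernel LS-TD setting, the kernel matrix is then computed only over this dictionary rather than over all observed transitions, which is what keeps the computational and storage costs low.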