Feature selection in reinforcement learning (RL), i.e., choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of the main challenges in scaling RL to real-world applications. Here we consider the Gaussian-process-based framework GPTD for approximate policy evaluation, and propose feature selection through marginal likelihood optimization of the associated hyperparameters. Our approach has two appealing benefits: (1) given just sample transitions, we can solve the policy evaluation problem fully automatically (without inspecting the learning task and, in theory, independently of the dimensionality of the state space), and (2) model selection allows us to consider more sophisticated kernels, which in turn enable us to identify relevant subspaces and eliminate irrelevant state variables, yielding substantial computational savings and improved prediction performance.
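To make the idea concrete, below is a minimal sketch (not the authors' implementation) of GPTD-style marginal likelihood optimization with an ARD squared-exponential kernel. It assumes a single sampled trajectory with T+1 visited states X and T observed rewards r, discount factor gamma, and Gaussian observation noise; all names (ard_kernel, gptd_nlml, etc.) are illustrative. Per-dimension lengthscales are learned by minimizing the negative log marginal likelihood of the rewards under the generative model r_t = v(x_t) - gamma * v(x_{t+1}) + noise, where the values v are drawn from a zero-mean GP.

import numpy as np
from scipy.optimize import minimize

def ard_kernel(X1, X2, log_ell, log_sf):
    # Squared-exponential kernel with one lengthscale per state dimension (ARD).
    ell = np.exp(log_ell)                       # lengthscales, shape (D,)
    sf2 = np.exp(2.0 * log_sf)                  # signal variance
    diff = (X1[:, None, :] - X2[None, :, :]) / ell
    return sf2 * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def gptd_nlml(theta, X, r, gamma):
    # Negative log marginal likelihood of rewards under a GPTD-style model:
    # latent values v ~ GP(0, k) generate rewards via r_t = v_t - gamma * v_{t+1} + eps.
    T, D = len(r), X.shape[1]
    log_ell, log_sf, log_sn = theta[:D], theta[D], theta[D + 1]
    K = ard_kernel(X, X, log_ell, log_sf)       # (T+1, T+1) covariance of values at visited states
    H = np.eye(T, T + 1) - gamma * np.eye(T, T + 1, k=1)  # maps values to expected rewards
    C = H @ K @ H.T + np.exp(2.0 * log_sn) * np.eye(T)    # covariance of observed rewards
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return 0.5 * r @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * T * np.log(2.0 * np.pi)

# Hypothetical usage on toy data: X holds T+1 states (rows), r holds T rewards.
rng = np.random.default_rng(0)
X = rng.normal(size=(51, 3))                    # trajectory in a 3-dimensional state space
r = rng.normal(size=50)
res = minimize(gptd_nlml, x0=np.zeros(X.shape[1] + 2),
               args=(X, r, 0.95), method="L-BFGS-B")
lengthscales = np.exp(res.x[:X.shape[1]])       # large values flag irrelevant state variables

In this sketch the learned ARD lengthscales supply the feature-selection signal described in the abstract: a state variable whose lengthscale grows very large barely influences the kernel and can be pruned, which is one plausible route to the computational savings and improved predictions the paper reports.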