We provide a novel framework for very fast model-based reinforcement learning in continuous state and action spaces. The framework requires probabilistic models that explicitly characterize their levels of confidence. Within this framework, we use flexible, non-parametric models to describe the world based on previously collected experience. We demonstrate learning on the cart-pole problem in a setting where we provide very limited prior knowledge about the task. Learning progresses rapidly, and a good policy is found after only a handful of iterations.
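The central ingredient described above is a probabilistic model that reports how confident it is in its own predictions. A minimal sketch of such a model is Gaussian-process regression: the posterior variance grows away from the collected experience, which is the "explicit level of confidence" the framework requires. The toy 1-D transition function, kernel hyperparameters, and noise level below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise_var=1e-2):
    """GP posterior mean and pointwise variance at test inputs Xs."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mean, var

# Toy "previously collected experience": noisy observations of a
# hypothetical 1-D transition function on [-3, 3].
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)

# Query the model both inside and outside the region covered by data.
Xs = np.linspace(-5, 5, 50)[:, None]
mean, var = gp_posterior(X, y, Xs)

# Far from the data (|x| > 3) the predictive variance is large; near the
# data it shrinks toward the noise floor. A planner can exploit this to
# avoid trusting the model where it has seen no experience.
```

The non-parametric flavor matters here: the model's flexibility grows with the data, so no fixed parametric form of the dynamics has to be assumed up front.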