Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, with current techniques the robot often needs to learn a new elementary movement even when a parametrized motor plan already exists that covers a related situation. A method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we describe how to learn such mappings from circumstances to meta-parameters using reinforcement learning. In particular, we use a kernelized version of reward-weighted regression. We show two applications of the presented setup in robotic domains: the generalization of throwing movements in darts, and of hitting movements in table tennis. We demonstrate that both tasks can be learned successfully using simulated and real robots.
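To make the idea of a kernelized reward-weighted regression for meta-parameter learning concrete, the following is a minimal sketch: explored (situation, meta-parameter, reward) triples are fitted with a kernel regression in which higher-reward samples receive less regularization, and the fitted model then predicts meta-parameters for new situations. The Gaussian kernel, the exponential reward transformation, and all names (e.g. `KernelRewardWeightedRegression`, `reward_scale`) are illustrative assumptions, not the exact algorithm or notation of the paper.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

class KernelRewardWeightedRegression:
    """Sketch of a kernelized reward-weighted regression mapping
    situations to meta-parameters of a motor primitive (illustrative only)."""

    def __init__(self, bandwidth=1.0, regularization=1e-2, reward_scale=1.0):
        self.bandwidth = bandwidth
        self.regularization = regularization
        self.reward_scale = reward_scale

    def fit(self, situations, meta_params, rewards):
        """Fit on explored (situation, meta-parameter, reward) triples."""
        self.situations = situations
        # Exponentially transform rewards into positive weights (assumption).
        weights = np.exp(self.reward_scale * (rewards - np.max(rewards)))
        K = gaussian_kernel(situations, situations, self.bandwidth)
        # High-reward samples get a small effective regularization (low "cost"),
        # so they dominate the fitted mapping.
        cost = np.diag(1.0 / np.maximum(weights, 1e-8))
        self.alpha = np.linalg.solve(K + self.regularization * cost, meta_params)
        return self

    def predict(self, new_situations):
        """Predict meta-parameters for new situations."""
        k = gaussian_kernel(new_situations, self.situations, self.bandwidth)
        return k @ self.alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 1-D situation (e.g. a target position), 2-D meta-parameters.
    s = rng.uniform(-1, 1, size=(50, 1))
    ideal_meta = np.hstack([np.sin(3 * s), np.cos(2 * s)])
    explored = ideal_meta + 0.3 * rng.normal(size=ideal_meta.shape)
    # Reward: explored meta-parameters closer to the ideal ones score higher.
    r = -np.sum((explored - ideal_meta) ** 2, axis=1)
    model = KernelRewardWeightedRegression(bandwidth=0.3, reward_scale=5.0)
    model.fit(s, explored, r)
    print(model.predict(np.array([[0.0], [0.5]])))
```

In a practical setup one would iterate this: sample meta-parameters around the current prediction, execute the movement, record the reward, and refit, so the mapping improves through self-generated data.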