A Kendama learning robot based on bi-directional theory. Neural Networks, 1996 Special Issue: Four Major Hypotheses in Neuroscience.
Introduction to Reinforcement Learning.
Apprenticeship learning via inverse reinforcement learning. ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning.
Model-free reinforcement learning as mixture learning. ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning.
Policy adaptation with tactile feedback. Proceedings of the 6th International Conference on Human-Robot Interaction.
Interactive imitation learning of object movement skills. Autonomous Robots.
Tactile Guidance for Policy Adaptation. Foundations and Trends in Robotics.
Co-evolutionary predictors for kinematic pose inference from RGBD images. Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation.
Dynamical movement primitives: Learning attractor models for motor behaviors. Neural Computation.
DCOB: Action space for reinforcement learning of high DoF robots. Autonomous Robots.
IWANN'13: Proceedings of the 12th International Conference on Artificial Neural Networks: Advances in Computational Intelligence, Volume Part I.
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for applying machine learning techniques in this context. Employing an improved form of the dynamical systems motor primitives originally introduced by Ijspeert et al. [2], we show how both discrete and rhythmic tasks can be learned by combining imitation and reinforcement learning. To do so, we present learning algorithms and representations targeted at practical application in robotics. Furthermore, we show that a start-up phase can be included in rhythmic primitives. We demonstrate that two new motor skills, Ball-in-a-Cup and Ball-Paddling, can be learned on a real Barrett WAM robot arm at a pace similar to human learning, while achieving a significantly more reliable final performance.
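To make the motor-primitive framework mentioned in the abstract concrete, the following is a minimal sketch of a standard discrete dynamical movement primitive (DMP) in the style of Ijspeert et al. [2]: a critically damped point attractor toward the goal, modulated by a learned forcing term of Gaussian basis functions on a decaying phase variable. The function name `dmp_rollout`, the gain values, and the basis-placement heuristic are illustrative assumptions, not the authors' exact implementation; in the paper the forcing-term weights would be fit by imitation and then refined by reinforcement learning.

```python
import numpy as np

def dmp_rollout(y0, g, weights, tau=1.0, dt=0.001,
                alpha_z=25.0, beta_z=25.0 / 4.0, alpha_x=8.0):
    """Euler-integrate one discrete DMP and return the position trajectory.

    Transformation system: tau*dz = alpha_z*(beta_z*(g - y) - z) + f(x)
    Canonical system:      tau*dx = -alpha_x * x
    (Gains are illustrative; alpha_z, beta_z chosen for critical damping.)
    """
    n_basis = len(weights)
    # Place Gaussian basis centers evenly in time, mapped through the phase.
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    widths = n_basis ** 1.5 / centers          # heuristic: narrower near x = 0
    y, z, x = float(y0), 0.0, 1.0              # position, scaled velocity, phase
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term vanishes as the phase x decays, guaranteeing convergence.
        f = x * (g - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)
```

With all weights zero the primitive reduces to a smooth point attractor that converges from `y0` to the goal `g`; nonzero weights shape the transient, which is what imitation learning initializes and reinforcement learning then improves.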