Learning new motor tasks from physical interaction is an important goal for both robotics and machine learning. However, when moving beyond basic skills, most monolithic machine learning approaches fail to scale; more complex skills require methods tailored to the domain of skill learning. In this paper, we take learning table tennis as an example and present a new framework that allows a robot to learn cooperative table tennis from physical interaction with a human. The robot first learns a set of elementary table tennis hitting movements from a human table tennis teacher by kinesthetic teach-in; these demonstrations are compiled into a set of motor primitives represented by dynamical systems. The robot subsequently generalizes these movements to a wider range of situations using our mixture of motor primitives approach. The resulting policy enables the robot both to select appropriate motor primitives and to generalize between them. Finally, the robot plays with a human table tennis partner and improves its behavior online. We show that the resulting setup is capable of playing table tennis using an anthropomorphic robot arm.
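The mixture-of-motor-primitives idea described above can be illustrated with a minimal sketch: each primitive carries a parameter vector (e.g. the weights of a dynamical-system primitive) and a gating kernel centred on the situation it was demonstrated in; the policy mixes primitive parameters according to normalized kernel responsibilities for the current situation. The class name, Gaussian gating, and the parameter-averaging form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class MixtureOfMotorPrimitives:
    """Illustrative gating over a set of learned primitives (assumed form)."""

    def __init__(self, centers, thetas, bandwidth=1.0):
        self.centers = np.asarray(centers, dtype=float)  # (K, d) situation prototypes
        self.thetas = np.asarray(thetas, dtype=float)    # (K, p) primitive parameters
        self.bandwidth = bandwidth

    def responsibilities(self, state):
        # Gaussian kernel weight per primitive, normalized over all primitives
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / self.bandwidth ** 2)
        return w / np.sum(w)

    def policy(self, state):
        # Generalize between primitives by mixing their parameters
        return self.responsibilities(state) @ self.thetas

# Two primitives learned for two distinct (hypothetical) ball situations
mix = MixtureOfMotorPrimitives(
    centers=[[0.0, 0.0], [1.0, 1.0]],
    thetas=[[1.0, 0.0], [0.0, 1.0]],
)
print(mix.policy(np.array([0.0, 0.0])))  # dominated by the first primitive
```

Near a prototype situation the gating concentrates on the matching primitive (selection); between prototypes it blends their parameters (generalization), which is the dual role the abstract attributes to the resulting policy.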