Programming a humanoid robot to walk is a challenging problem in robotics. Traditional approaches rely heavily on prior knowledge of the robot's physical parameters to devise sophisticated control algorithms for generating a stable gait. In this paper, we provide, to our knowledge, the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data. Training using human motion capture is an intuitive and flexible approach to programming a robot, but direct usage of mocap data usually results in dynamically unstable motion. Furthermore, optimization using mocap data in the humanoid full-body joint space is typically intractable. We propose a new model-free approach to tractable imitation-based learning in humanoids. We represent kinematic information from human motion capture in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as closely as possible while at the same time generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking learned from mocap data using both a simulator and a real humanoid robot.
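The two learning components the abstract describes, projecting high-dimensional joint-angle trajectories into a low-dimensional subspace and fitting a predictive model of the dynamics in that subspace, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes PCA as the dimensionality-reduction step and a simple affine least-squares model in place of the learned predictive dynamic model; all function names and the synthetic "gait" data are invented for the example.

```python
import numpy as np

def pca_subspace(X, k):
    """Top-k principal directions of (T, d) joint-angle data X, plus the mean.
    Stands in for the paper's low-dimensional mocap representation."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal components.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return Vt[:k], mu

def project(X, W, mu):
    return (X - mu) @ W.T          # (T, k) latent trajectory

def reconstruct(Z, W, mu):
    return Z @ W + mu              # back to the full joint space

def fit_dynamics(Z):
    """Least-squares affine model z[t+1] ~ A z[t] + b, a stand-in for the
    learned predictive dynamic model over low-dimensional commands."""
    Phi = np.hstack([Z[:-1], np.ones((len(Z) - 1, 1))])
    theta, *_ = np.linalg.lstsq(Phi, Z[1:], rcond=None)
    return theta[:-1].T, theta[-1]   # A (k, k), b (k,)

# --- demo on synthetic periodic "gait" data (illustrative only) ---
T, d, k = 200, 20, 4
t = np.linspace(0, 4 * np.pi, T)
# Four sinusoidal gait components mixed into d joint angles.
basis = np.random.RandomState(0).randn(k, d)
X = np.column_stack(
    [np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)]) @ basis

W, mu = pca_subspace(X, k)
Z = project(X, W, mu)              # low-dimensional trajectory
A, b = fit_dynamics(Z)             # predictive model in the subspace
X_rec = reconstruct(Z, W, mu)
Z_pred = Z[:-1] @ A.T + b          # one-step predictions
```

In the paper's pipeline an optimizer would then search over low-dimensional motor commands, using the learned predictive model to trade off kinematic similarity to the mocap data against dynamic stability; that search step is omitted here.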