In this paper, we propose an imitation learning framework that generates physically consistent behaviors by estimating the ground reaction force from captured human behaviors. In the proposed framework, we first extract behavioral primitives, each represented by a linear dynamical model, from captured human movements and the measured ground reaction force using a Gaussian mixture of linear dynamical models. Because the primitives are extracted automatically from data, our method depends only weakly on classification criteria defined by an experimenter. By switching among the primitives in different combinations while estimating the ground reaction force, different physically consistent behaviors can be generated. We apply the proposed method to a four-link robot model to generate squat motion sequences; the model successfully generated squat movements using our imitation learning framework. To evaluate generalization performance, we also apply the method to robot models whose torso weights and lengths differ from those of the human demonstrator, and we evaluate the resulting control performance. In addition, we show that the robot model can recognize and imitate demonstrator movements even when the observed movements deviate from those used to construct the primitives. For further evaluation in a higher-dimensional state space, we apply the method to a seven-link robot model, which was able to generate squat-and-sway motions using the proposed framework.
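The core of the primitive-extraction step is fitting a Gaussian mixture of linear dynamical models to an observed trajectory, so that each mixture component explains a segment of the motion with its own linear dynamics. Below is a minimal sketch of that idea, assuming discrete-time primitives of the form x[t+1] ≈ A_i x[t] + b_i with isotropic Gaussian noise, fitted by EM; the function name, fixed noise variance, and regularization are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fit_mixture_lds(X, K, n_iter=50, noise_var=0.01, seed=0):
    """EM for a Gaussian mixture of K linear dynamical models.

    Each primitive k predicts x[t+1] ~ N(A_k x[t] + b_k, noise_var * I).
    X: (T, D) observed trajectory. Returns (A, b, resp), where resp[k, t]
    is the responsibility of primitive k for the transition t -> t+1.
    """
    rng = np.random.default_rng(seed)
    T, D = X.shape
    Xt, Xn = X[:-1], X[1:]                                 # T-1 transitions
    # Random initialization of per-primitive dynamics near the identity.
    A = rng.normal(0, 0.1, (K, D, D)) + np.eye(D)
    b = rng.normal(0, 0.1, (K, D))
    pi = np.full(K, 1.0 / K)                               # mixture weights
    for _ in range(n_iter):
        # E-step: responsibility of each primitive for each transition.
        err = np.stack([Xn - (Xt @ A[k].T + b[k]) for k in range(K)])
        logp = -0.5 * (err ** 2).sum(-1) / noise_var + np.log(pi)[:, None]
        logp -= logp.max(0, keepdims=True)                 # numerical stability
        resp = np.exp(logp)
        resp /= resp.sum(0, keepdims=True)                 # (K, T-1)
        # M-step: weighted least squares for each (A_k, b_k).
        Phi = np.hstack([Xt, np.ones((T - 1, 1))])         # augment with bias
        for k in range(K):
            G = Phi.T * resp[k]                            # weight each transition
            theta = np.linalg.solve(G @ Phi + 1e-6 * np.eye(D + 1), G @ Xn)
            A[k], b[k] = theta[:D].T, theta[D]
        pi = resp.mean(1)
        pi /= pi.sum()
    return A, b, resp
```

Because segmentation falls out of the responsibilities rather than hand-labeled boundaries, this is the sense in which the method depends only weakly on experimenter-defined classification criteria; hard segment labels, if needed, can be taken as `resp.argmax(0)`.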