This paper presents an efficient model-based approach for automatic human motion registration, which builds temporal correspondences between structurally similar but distinct motion examples. The key idea of the model-based registration process is to construct a parameterized motion model from a set of pre-registered motion examples. Given such a model, an input motion can be registered by continuously deforming the model until it best matches the input. We formulate the registration process as a gradient-based nonlinear optimization that minimizes an objective function measuring the difference between the input motion and the deforming motion. We also develop a multi-resolution optimization process to efficiently estimate the model parameters along with the temporal correspondences between the input motion and the deforming motion. We demonstrate the performance of our approach by testing the algorithm on difficult motion sequences and comparing it against alternative approaches.
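The abstract describes a gradient-based, multi-resolution optimization that deforms a parameterized motion model to match an input motion. Below is a minimal sketch of that idea, not the paper's actual formulation: it assumes the parameterized model is a linear blend of pre-registered examples and the temporal deformation is a single global linear time warp. The blend weights `w`, the warp parameters `(a, b)`, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch of model-based motion registration via gradient-based,
# multi-resolution optimization. Assumptions (not from the paper): the
# parameterized model is a linear blend of pre-registered examples, and
# the temporal deformation is a global linear warp u' = a*u + b applied
# in normalized time so the parameters are resolution-independent.

import numpy as np
from scipy.optimize import minimize


def synthesize(examples, w):
    """Blend pre-registered examples (k, T, D) with weights w (k,)."""
    w = np.abs(np.asarray(w)) + 1e-6                      # keep weights positive
    return np.tensordot(w / w.sum(), examples, axes=1)    # -> (T, D)


def warp_time(motion, a, b):
    """Resample motion (T, D) at warped normalized times u' = a*u + b."""
    T = motion.shape[0]
    u = np.linspace(0.0, 1.0, T)
    t = np.clip((a * u + b) * (T - 1), 0.0, T - 1.0)
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    frac = (t - lo)[:, None]
    return (1.0 - frac) * motion[lo] + frac * motion[hi]  # linear interpolation


def objective(params, examples, target):
    """Squared pose difference between the deforming model and the input."""
    k = examples.shape[0]
    w, (a, b) = params[:k], params[k:]
    deformed = warp_time(synthesize(examples, w), a, b)
    return np.sum((deformed - target) ** 2)


def register(examples, target, levels=3):
    """Coarse-to-fine estimation of blend weights and warp parameters."""
    k = examples.shape[0]
    params = np.concatenate([np.full(k, 1.0 / k), [1.0, 0.0]])  # neutral start
    for lvl in reversed(range(levels)):
        step = 2 ** lvl                          # subsample frames when coarse
        res = minimize(objective, params,
                       args=(examples[:, ::step], target[::step]),
                       method="L-BFGS-B")        # finite-difference gradients
        params = res.x                           # warm-start the finer level
    return params


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.cumsum(rng.normal(size=(64, 6)), axis=0)    # toy joint curves
    examples = np.stack([base, 1.5 * base, base + 1.0])   # pre-registered set
    target = warp_time(synthesize(examples, [0.2, 0.5, 0.3]), 0.9, 0.05)
    print(register(examples, target))                     # recovered w, a, b
```

Running the script recovers the warp parameters and the blend weights up to the normalization applied in `synthesize`. The coarse-to-fine loop mirrors the abstract's multi-resolution strategy: each level solves on subsampled frames and warm-starts the next finer level, which is cheaper and less prone to poor local minima than optimizing at full resolution from scratch. The real system registers dense per-frame temporal correspondences rather than a single linear warp; that richer deformation is beyond this sketch.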