Creating geometrically detailed mesh animations is an involved and resource-intensive process in digital content creation. In this work, we present a method to rapidly combine available sparse motion-capture data with existing mesh sequences to produce a large variety of new animations. The key idea is to model shape changes correlated with the pose of the animated object via a part-based statistical shape model. We observe that compact linear models suffice once the mesh is segmented into nearly rigid parts. The same segmentation further guides the parameterization of the pose, which is learned in conjunction with the marker movement. Besides its inherent high geometric detail, the presented method is also robust against errors in segmentation and pose parameterization. Because both the learning and the synthesis phases are efficient, our model allows users to interactively steer virtual avatars based on a few markers extracted from video data or from input devices such as the Kinect sensor.
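The core pipeline described above, a compact linear shape model per nearly rigid part driven by sparse marker data, can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes synthetic toy data and illustrative dimensions (`F` frames, `M` markers, `V` vertices of one part) and uses plain PCA plus least-squares regression as one simple way to realize such a model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data for ONE nearly rigid part: F frames, M sparse
# markers (pose input), V mesh vertices (shape output). All names and
# dimensions here are illustrative assumptions, not the paper's.
F, M, V = 200, 4, 50
pose = rng.normal(size=(F, 3 * M))                 # flattened marker coords
true_map = rng.normal(size=(3 * M, 3 * V)) * 0.1   # hidden linear relation
shape = pose @ true_map + rng.normal(scale=0.01, size=(F, 3 * V))

# 1) Compact linear shape model for the part: PCA on the vertex data.
mean = shape.mean(axis=0)
U, S, Vt = np.linalg.svd(shape - mean, full_matrices=False)
k = 12                                             # few modes per rigid part
basis = Vt[:k]                                     # (k, 3V) shape basis
coeffs = (shape - mean) @ basis.T                  # (F, k) training coeffs

# 2) Learn a linear map from the pose (marker) input to shape coefficients.
W, *_ = np.linalg.lstsq(pose, coeffs, rcond=None)

# 3) Synthesis: new marker input -> detailed part geometry.
def synthesize(markers):
    """Predict flattened vertex positions from flattened marker coords."""
    return mean + (markers @ W) @ basis

new_pose = rng.normal(size=(1, 3 * M))
pred = synthesize(new_pose)                        # (1, 3V) new part shape
```

In a full system, one such model would be fitted per segment, with the segmentation also defining the per-part pose parameterization; both fitting and synthesis are linear algebra, which is what makes interactive rates plausible.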