A novel performance-driven facial shape animation method is presented that automatically maps expressions from a source face to a target face. Unlike prior expression cloning approaches, the proposed method animates a new target face with the help of real facial expression samples: the shape deformation of the target face is learned from samples in order to generate the corresponding expressions. The process consists of two main stages. First, source motion vectors are transferred through a statistical face model to generate a reasonable expression on the target face. Then, local deformation constraints are applied to refine the animation results. In this second stage, the local deformation characteristics of each target organ are learned from the samples, preserving both the personality and the expression style of the target face. Experimental results on different facial animations demonstrate the feasibility and effectiveness of the proposed method.
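The first stage described above can be illustrated with a minimal sketch: a statistical (PCA) shape model is fit to the target's expression samples, and a transferred expression is obtained by adding the source motion vectors to the target's neutral shape and projecting the result onto the learned deformation modes. The function names, the use of PCA as the statistical model, and all parameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_shape_model(samples, n_modes=2):
    """Fit a PCA shape model to expression samples.

    samples: (n_samples, d) array of stacked landmark coordinates.
    Returns the mean shape and the top deformation modes (rows of vt).
    NOTE: PCA is an assumed stand-in for the paper's statistical face model.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered sample matrix yields orthonormal deformation modes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def transfer_expression(target_neutral, source_motion, mean, modes):
    """Transfer source motion vectors to the target face.

    The raw deformed shape (neutral + motion) is projected onto the learned
    subspace, so the output remains a plausible expression for the target.
    """
    raw = target_neutral + source_motion
    coeffs = modes @ (raw - mean)          # project deformation onto modes
    return mean + modes.T @ coeffs         # reconstruct constrained shape
```

In this sketch, the projection step plays the role of the "reasonable expression" constraint: arbitrary source motion is snapped to the span of deformations actually observed in the target's samples. The second-stage local refinement would apply the same idea per facial organ rather than to the whole shape.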