Technical Section: Perception-driven facial expression synthesis
Computers and Graphics
In this paper we present the first Facial Action Coding System (FACS)-valid model based on dynamic 3D scans of human faces, intended for use in graphics and psychological research. The model is parameterized by FACS Action Units (AUs) and has been independently validated by FACS experts. Using this model, we explore the perceptual differences between linear facial motions, represented by a linear blend-shape approach, and real facial motions synthesized through the 3D facial model. Through numerical measures and visualizations, we show that real facial motion is geometrically nonlinear at the level of individual vertices. In experiments, we explore the perceptual benefits of nonlinear motion for different AUs. Our results are insightful for designers of animation systems in both the entertainment industry and scientific research: they reveal a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend-shape motion. However, our findings suggest that not all motions need to be animated nonlinearly; the advantage may depend on the type of facial action being produced and the phase of the movement.
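To make the linear/nonlinear distinction concrete, the following is a minimal sketch (not from the paper; all names and values are illustrative) of why a linear blend shape moves every vertex along a straight line, together with one simple way to quantify how far a captured vertex trajectory departs from that line:

```python
import math

def lerp(a, b, w):
    """Linear blend shape for one vertex: interpolate neutral -> peak by weight w."""
    return tuple(ai + w * (bi - ai) for ai, bi in zip(a, b))

def deviation_from_linear(trajectory):
    """Crude nonlinearity measure for one vertex trajectory: the maximum
    distance of intermediate samples from the straight segment joining the
    first and last positions."""
    start, end = trajectory[0], trajectory[-1]
    seg = [e - s for s, e in zip(start, end)]
    denom = sum(c * c for c in seg) or 1.0

    def dist_to_segment(p):
        # Project p onto the start-end line, clamp to the segment, take the residual.
        vec = [pi - si for pi, si in zip(p, start)]
        t = max(0.0, min(1.0, sum(v * c for v, c in zip(vec, seg)) / denom))
        foot = [s + t * c for s, c in zip(start, seg)]
        return math.dist(p, foot)

    return max(dist_to_segment(p) for p in trajectory[1:-1])

# Blend-shape trajectory: straight by construction, so its deviation is zero.
neutral, peak = (0.0, 0.0, 0.0), (1.0, 2.0, 0.0)
linear_traj = [lerp(neutral, peak, w / 10) for w in range(11)]

# A hypothetical "captured" trajectory that bulges off the straight path.
curved_traj = [(w / 10, 2 * w / 10, 0.3 * math.sin(math.pi * w / 10))
               for w in range(11)]

print(round(deviation_from_linear(linear_traj), 6))   # 0.0
print(deviation_from_linear(curved_traj) > 0.2)       # True
```

A per-vertex measure like this, aggregated over the mesh and over time, is one plausible way to visualize where and when facial motion is most strongly nonlinear.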