Synthesizing expressive facial animation is a challenging problem in the graphics community. In this paper, we present an expressive facial animation synthesis system based on automated learning from facial motion capture data. Accurate 3D motions of markers on a human subject's face are captured while the subject recites a pre-designed corpus with specific spoken and visual expressions. We present a novel motion-capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion-signal processing (phoneme-based time-warping and subtraction) followed by Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech for novel speech input; then a texture-synthesis-based approach generates a novel dynamic expression signal from the PIEES model; finally, the synthesized expression signal is blended with the neutral visual speech to produce the final expressive facial animation. Our experiments demonstrate that the system effectively synthesizes realistic expressive facial animation.
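The PIEES construction described above (warp expressive motion onto its neutral counterpart, subtract to isolate the expression signal, then reduce by PCA) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and simple linear resampling stands in for the paper's phoneme-based time-warping.

```python
import numpy as np

def time_warp(seq, target_len):
    """Linearly resample a (frames, dims) motion clip to target_len frames
    (a simple stand-in for the paper's phoneme-based time-warping)."""
    src = np.linspace(0.0, 1.0, len(seq))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack(
        [np.interp(dst, src, seq[:, d]) for d in range(seq.shape[1])], axis=1
    )

def build_piees(expressive_clips, neutral_clips, n_components=5):
    """Build a Phoneme-Independent Expression Eigenspace (PIEES):
    warp each expressive clip onto its neutral counterpart, subtract to
    isolate the per-frame expression signal, then reduce the stacked
    residual frames with PCA."""
    residual_frames = []
    for expr, neut in zip(expressive_clips, neutral_clips):
        warped = time_warp(expr, len(neut))
        residual_frames.append(warped - neut)   # dynamic expression signal
    X = np.vstack(residual_frames)              # (total_frames, marker_dims)
    mean = X.mean(axis=0)
    # PCA via SVD of the mean-centered residuals
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                   # (components, marker_dims)
    return mean, basis

def project(frame, mean, basis):
    """Low-dimensional expression coordinates of one marker-motion frame."""
    return basis @ (frame - mean)
```

A synthesized expression signal generated in this eigenspace could then be mapped back to marker space (`mean + coords @ basis`) and blended onto the synthesized neutral visual speech, mirroring the final blending stage of the pipeline.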