Motion-capture-based facial animation has recently gained popularity in many applications, such as movies, video games, and human-computer interface design. Driven by the sophisticated facial motion of a human performer, animated characters become far more lively and convincing. However, motion data is difficult to edit, which limits its reuse across different tasks. To address this problem, statistical techniques have been applied to learn models of facial motion so that new motions can be derived from existing data. Most existing research focuses on audio-to-visual mapping and the reordering of words, or on photorealistically matching the synthesized face to the original performer. Little attention has been paid to modifying and controlling facial expression, or to mapping expressive motion onto other 3D characters.

This article describes a method for creating expressive facial animation by extracting information from the expression axis of a speech performance. First, a statistical model that factors expression and visual speech is learned from video. This model can then be used to analyze the facial expression of a new performance or to modify the expressions of an existing one. With this expression analysis, the facial motion can be retargeted more effectively to another 3D face model: the blendshape retargeting technique is extended to include subsets of morph targets belonging to different facial expression groups, and the proportion of each subset in the final animation is weighted according to the expression information. The resulting animation conveys much more emotion than retargeting from motion vectors alone. Finally, since head motion is essential for adding liveliness to facial animation, we introduce an audio-driven synthesis technique for generating new head motion.
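The "statistical model for factoring expression and visual speech" can be illustrated with an asymmetric bilinear (style/content) factorization: observations are arranged in a matrix whose rows index expression "styles" and whose columns stack speech-content features, and a truncated SVD yields per-expression style vectors and a shared content basis. The sketch below is a minimal illustration on synthetic data, not the paper's implementation; all dimensions and variable names are assumptions.

```python
import numpy as np

# Synthetic stand-in for tracked facial-motion data (assumption: the real
# system uses features extracted from video, not random vectors).
rng = np.random.default_rng(0)
S, C, D, J = 3, 5, 10, 2            # expressions, content classes, feature dim, model dim
style_true = rng.normal(size=(S, J))      # hidden per-expression factors
content_true = rng.normal(size=(J, C * D))  # hidden speech-content basis
Y = style_true @ content_true             # observation matrix, shape (S, C*D)

# Asymmetric bilinear fit via SVD: Y ≈ A @ B, with A the expression-style
# factors and B the visual-speech content basis.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
A = U[:, :J] * s[:J]                      # style vectors, one row per expression
B = Vt[:J]                                # shared content basis

# Analysis/synthesis: combining the style of expression 0 with the shared
# content basis reconstructs that expression's motion features.
recon = A[0] @ B
print(np.allclose(recon, Y[0], atol=1e-8))  # rank-J data, so exact up to numerics
```

Analyzing a new performance then amounts to solving a small least-squares problem for its style vector given the learned content basis.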
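The expression-weighted blendshape retargeting described above can be sketched as follows: morph targets are grouped by expression, and each group's contribution to the final frame is scaled by the analyzed expression proportion. This is a toy illustration under assumed names and displacement values, not the article's actual rig or weighting scheme.

```python
import numpy as np

V = 6  # tiny stand-in for the flattened vertex-displacement dimension

# Hypothetical morph-target subsets grouped by facial expression; each target
# stores a vertex-displacement vector (values here are illustrative).
target_groups = {
    "neutral": {"jaw_open":  np.full(V,  0.1)},
    "happy":   {"smile":     np.full(V,  0.3)},
    "angry":   {"brow_down": np.full(V, -0.2)},
}

def retarget_frame(base, activations, expr_weights):
    """Blend morph-target subsets, scaling each expression group's targets
    by that expression's analyzed proportion (expr_weights sums to 1)."""
    out = base.copy()
    for expr, group in target_groups.items():
        w = expr_weights.get(expr, 0.0)
        for name, delta in group.items():
            out += w * activations.get(name, 0.0) * delta
    return out

# Expression analysis says the performance is mostly happy; the tracked
# motion activates every target fully in this toy frame.
frame = retarget_frame(np.zeros(V),
                       {"jaw_open": 1.0, "smile": 1.0, "brow_down": 1.0},
                       {"neutral": 0.2, "happy": 0.7, "angry": 0.1})
print(frame)  # each coordinate: 0.2*0.1 + 0.7*0.3 + 0.1*(-0.2) = 0.21
```

With pure motion-vector retargeting the three targets would contribute equally; weighting by the expression proportions lets the "happy" subset dominate the output, which is what carries the emotion onto the new character.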