We present a facial model designed primarily to support animated speech. The model takes facial geometry as input and transforms it into a parametric deformable model. It uses a muscle-based parameterization, which eases the integration of speech synchrony with facial expressions. A highly deformable lip model is grafted onto the input geometry to provide the geometric complexity needed for creating lip shapes and high-quality renderings, and a highly deformable tongue model represents the shapes the tongue assumes during speech. Teeth, gums, and upper-palate geometry complete the inner mouth. To decrease processing time, we deform the facial surface hierarchically. We also present a method to animate the model over time, producing animated speech with a coarticulation model that blends visemes using dominance functions. We treat visemes as a dynamic shaping of the vocal tract, describing them as curves rather than keyframes. We demonstrate the utility of these techniques in a text-to-audiovisual-speech system that creates speech animation from unrestricted text. After the facial and coarticulation models are interactively initialized, the system automatically creates accurate real-time animated speech from input text, producing large amounts of animated speech cheaply and with very low resource requirements.
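The dominance-function blending described above can be sketched as follows. This is a minimal illustration in the style of Cohen–Massaro coarticulation, not the paper's actual implementation: the `dominance` parameters (`alpha`, `theta`, the exponent `c`) and the two-parameter viseme targets are illustrative assumptions.

```python
import math

def dominance(t, center, alpha=1.0, theta=4.0, c=1.0):
    # Dominance peaks at the viseme's center time and decays
    # exponentially on either side; alpha scales the peak, theta the
    # decay rate. Values here are illustrative, not fitted.
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blend_visemes(t, visemes):
    # visemes: list of (center_time, alpha, target_params), where
    # target_params is a tuple of articulatory parameters (e.g. lip
    # opening, lip protrusion). The output at time t is the
    # dominance-weighted average of all targets, so neighboring
    # visemes influence each other -- the coarticulation effect.
    weights = [dominance(t, center, alpha) for center, alpha, _ in visemes]
    total = sum(weights)
    n_params = len(visemes[0][2])
    return tuple(
        sum(w * params[i] for w, (_, _, params) in zip(weights, visemes)) / total
        for i in range(n_params)
    )

# Two hypothetical visemes: a rounded /u/ at t=0.1 s and a spread /i/
# at t=0.4 s, each as (lip opening, protrusion).
u = (0.1, 1.0, (0.2, 0.9))
i = (0.4, 1.0, (0.6, 0.1))
mid = blend_visemes(0.25, [u, i])  # equidistant: the plain average
```

Because the blend is evaluated at any time `t`, sampling it densely yields the smooth parameter curves the abstract describes, rather than discrete keyframes.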