This paper presents a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences between the feature points' positions, a generic face model is deformed nonrigidly, treating each facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared with corresponding models obtained by the commonly used adaptation method based on 3D radial basis functions. The generated 3D models are integrated into a talking-head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added while preserving lip and speech synchronization.
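The radial-basis-function adaptation used as the baseline for comparison can be sketched as follows: landmark displacements (generic-model feature points to the user's measured feature points) are interpolated over all mesh vertices. The Gaussian kernel, the `sigma` parameter, and the `rbf_deform` helper are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_deform(vertices, src_pts, dst_pts, sigma=1.0):
    """Nonrigid adaptation sketch: interpolate the landmark
    displacements (src_pts -> dst_pts) over all mesh vertices
    with a Gaussian RBF. Kernel and sigma are assumptions."""
    def kernel(a, b):
        # Pairwise squared distances between point sets a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigma**2)

    # Solve K w = displacement for per-axis RBF weights (n x 3);
    # the Gaussian kernel matrix is positive definite for distinct points
    w = np.linalg.solve(kernel(src_pts, src_pts), dst_pts - src_pts)

    # Evaluate the interpolated displacement at every mesh vertex
    return vertices + kernel(vertices, src_pts) @ w
```

Because the weights solve the interpolation system exactly, vertices coinciding with feature points land exactly on their targets, while the remaining vertices are deformed smoothly in between.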