Creating Speech-Synchronized Animation. IEEE Transactions on Visualization and Computer Graphics.
Transferring of Speech Movements from Video to 3D Face Space. IEEE Transactions on Visualization and Computer Graphics.
Assembling an expressive facial animation system. Proceedings of the 2007 ACM SIGGRAPH Symposium on Video Games.
When and How to Smile: Emotional Expression for 3D Conversational Agents. Agent Computing and Multi-Agent Systems.
Interpreting Human and Avatar Facial Expressions. INTERACT '09: Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction, Part I.
Vision based speech animation transferring with underlying anatomical structure. ACCV'06: Proceedings of the 7th Asian Conference on Computer Vision, Part I.
We introduce a facial animation system that produces real-time animation sequences, including speech synchronization and non-verbal speech-related facial expressions, from plain text input. A state-of-the-art text-to-speech synthesis component performs a linguistic analysis of the text input and creates a speech signal from phonetic and intonation information. The phonetic transcription additionally drives a speech synchronization method for the physically based facial animation. Further high-level information from the linguistic analysis, such as different types of accents and pauses as well as the sentence type, is used to generate non-verbal speech-related facial expressions such as head, eye, and eyebrow movements or voluntary eye blinks. Moreover, emoticons in the input text are translated into XML markup that triggers emotional facial expressions.
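The emoticon translation step described above could be sketched as a simple text substitution pass. The emotion names and the `<emotion>` tag used here are assumptions for illustration; the abstract does not specify the system's actual XML vocabulary or emoticon set.

```python
import re

# Hypothetical mapping from emoticons to emotion labels; the real
# system's emoticon set and tag names are not given in the abstract.
EMOTICON_EMOTIONS = {
    ":)": "joy",
    ":(": "sadness",
    ":O": "surprise",
}

def markup_emoticons(text: str) -> str:
    """Replace each emoticon in the input with an XML element that a
    downstream animation component could use to trigger the
    corresponding emotional facial expression."""
    pattern = re.compile("|".join(re.escape(e) for e in EMOTICON_EMOTIONS))
    return pattern.sub(
        lambda m: f'<emotion type="{EMOTICON_EMOTIONS[m.group(0)]}"/>',
        text,
    )

# Example: markup_emoticons("Nice to see you :)")
# yields 'Nice to see you <emotion type="joy"/>'
```

In a full pipeline, such markup would be merged with the phonetic and intonation annotations before being handed to the animation back end.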