PDE-Based Facial Animation: Making the Complex Simple
ISVC '08 Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II
In this work we propose a talking head system that animates facial expressions on a template face generated as the solution of a Partial Differential Equation (PDE). The system uses a set of pre-configured curves as boundary conditions for the chosen PDE to compute an internal template face surface. This surface is then used to associate the various facial features of the template with a given 3D face object. Motion retargeting is then applied to transfer the deformations in these regions from the template to the target object. The procedure is repeated until all expressions in the database have been computed and transferred to the target 3D human face model. Additionally, the system interacts with the user through an artificial intelligence (AI) chatterbot, which generates a textual response to the user's input. Speech and facial animation are synchronized using the Microsoft Speech API, which converts the chatterbot's response to speech.
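The abstract does not state the governing equation. As a point of reference only, PDE-based surface generation in this line of work (e.g., the Bloor-Wilson method) typically solves a fourth-order elliptic PDE over a parametric domain (u, v), with the pre-configured curves supplying the boundary conditions; the form below is an illustrative assumption, not necessarily the equation used in the paper:

    \left( \frac{\partial^{2}}{\partial u^{2}} + a^{2} \frac{\partial^{2}}{\partial v^{2}} \right)^{2} \mathbf{X}(u, v) = 0

where \mathbf{X}(u, v) is the template face surface and the parameter a weights the smoothing between the two parametric directions.

On the speech side, the Microsoft Speech API can be driven from a short script. The sketch below (Python with the pywin32 package, an assumption since the paper does not state its implementation language) only shows converting the chatterbot's text reply to speech, not the lip-sync step; the variable bot_reply is hypothetical and stands for the chatterbot's output.

    # Minimal sketch: speak the chatterbot's text reply via Microsoft SAPI.
    # Assumes Windows with the pywin32 package installed.
    import win32com.client

    bot_reply = "Hello, how can I help you?"   # placeholder for the AI bot's response
    voice = win32com.client.Dispatch("SAPI.SpVoice")  # system text-to-speech voice
    voice.Speak(bot_reply)                             # synchronous playback

In the full system, the facial animation would additionally be timed against the viseme/phoneme events that the Speech API exposes during playback; that synchronization step is omitted from this sketch.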