In this paper, we present a framework for a speech-driven face animation system with expressions. The framework systematically addresses audio-visual data acquisition, expressive trajectory analysis, and audio-visual mapping. Within it, we learn the correlation between neutral and expressive facial deformations using a Gaussian mixture model (GMM). A hierarchical structure is proposed to map acoustic parameters to lip facial animation parameters (FAPs), and the synthesized neutral FAP streams are then extended with expressive variations according to the prosody of the input speech. Quantitative evaluation of the experimental results is encouraging, and the synthesized face shows realistic quality.
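The abstract does not spell out the mapping equations, but GMM-based correlation between neutral and expressive deformations is commonly exploited via conditional-expectation regression over a joint feature space. The NumPy sketch below is a minimal illustration of that idea, not the paper's actual model: the component weights, means, and covariances are hand-set assumptions, and the features are reduced to one neutral-deformation scalar x and one expressive-deformation scalar y. It predicts y from x as E[y|x] under a two-component joint GMM.

```python
import numpy as np

# Hypothetical joint GMM over z = [x; y], where x is a neutral facial
# deformation feature and y the corresponding expressive deformation.
# All parameters are hand-set for illustration, not learned from data.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0],
                  [2.0, 3.0]])                    # each row: [mu_x, mu_y]
covs = np.array([[[1.0, 0.8], [0.8, 1.0]],
                 [[1.0, 0.5], [0.5, 1.0]]])       # per-component 2x2 covariance

def gmm_regress(x):
    """E[y | x] under the joint GMM: a mixture of per-component linear
    regressions, weighted by each component's responsibility for x."""
    resp = np.empty(len(weights))
    cond = np.empty(len(weights))
    for k in range(len(weights)):
        mu_x, mu_y = means[k]
        sxx, sxy = covs[k][0]
        # Responsibility: weight times the Gaussian density of x under
        # component k's x-marginal N(mu_x, sxx).
        resp[k] = weights[k] * np.exp(-0.5 * (x - mu_x) ** 2 / sxx) \
                  / np.sqrt(2.0 * np.pi * sxx)
        # Conditional mean of y given x for component k.
        cond[k] = mu_y + (sxy / sxx) * (x - mu_x)
    resp /= resp.sum()
    return float(resp @ cond)

print(gmm_regress(0.0))   # near the "neutral" component, output stays small
print(gmm_regress(2.0))   # near the "expressive" component, output approaches 3
```

In a full system, the same conditional expectation would be computed over vector-valued FAP features, with the GMM parameters estimated by EM from the recorded audio-visual data.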