A natural human-computer interface requires the integration of realistic audio and visual information for both perception and display. This paper proposes a lifelike talking head system that converts text to speech with synchronized mouth movements and emotional facial expressions. The talking head is based on a generic 3D human head model; a personalized model is incorporated into the system, and with texture mapping it offers a more natural and realistic appearance than the generic model alone. To express emotion, emotional speech synthesis and emotional facial animation are integrated, and Chinese viseme models are also constructed. The resulting emotional talking head system generates natural and vivid audio-visual output.
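The synchronization step described above — driving mouth animation from synthesized speech via viseme models — can be sketched minimally as a phoneme-to-viseme keyframe conversion. This is a hypothetical illustration, not the authors' implementation: the viseme table, phoneme labels, and function names are all assumptions, and a real system would use the paper's Chinese viseme models and interpolate mouth shapes on the 3D head.

```python
# Hypothetical sketch: convert TTS phoneme timing into viseme keyframes
# that a renderer could interpolate to animate the talking head's mouth.
# The viseme classes below are illustrative, not the paper's actual models.
PHONEME_TO_VISEME = {
    "b": "closed_lips", "p": "closed_lips", "m": "closed_lips",
    "f": "lip_teeth",
    "a": "open_wide", "o": "rounded", "u": "rounded",
    "i": "spread", "e": "mid_open",
    "sil": "neutral",
}

def phonemes_to_keyframes(phoneme_track):
    """Turn (phoneme, duration_in_seconds) pairs into (time, viseme) keyframes.

    A keyframe is emitted at the start of each phoneme; consecutive identical
    visemes are merged so the animation curve has no redundant keys.
    """
    keyframes, t = [], 0.0
    for phoneme, duration in phoneme_track:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        if not keyframes or keyframes[-1][1] != viseme:
            keyframes.append((round(t, 3), viseme))
        t += duration
    return keyframes

# Example: the syllables "ma ma" preceded by silence.
track = [("sil", 0.1), ("m", 0.08), ("a", 0.2), ("m", 0.08), ("a", 0.25)]
print(phonemes_to_keyframes(track))
# → [(0.0, 'neutral'), (0.1, 'closed_lips'), (0.18, 'open_wide'),
#    (0.38, 'closed_lips'), (0.46, 'open_wide')]
```

In a full pipeline, emotional expression would be layered on top of these speech-driven keyframes, blending the viseme-driven mouth shapes with expression parameters for the rest of the face.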