Real-time language independent lip synchronization method using a genetic algorithm
Signal Processing - Special section: Multimodal human-computer interfaces
[HUGE]: universal architecture for statistically based HUman GEsturing
IVA'06 Proceedings of the 6th international conference on Intelligent Virtual Agents
In this work we concentrate on finding the correlation between the speech signal and the occurrence of facial gestures, with the goal of creating believable virtual humans. We propose a method for implementing facial gestures as a valuable part of human behavior and communication. The information needed to generate the facial gestures is extracted from speech prosody by analyzing natural speech in real time. This work builds on the previously developed HUGE architecture for statistically based facial gesturing and extends our earlier work on automatic real-time lip sync.
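The pipeline the abstract describes — extracting prosodic cues (energy, pitch) from incoming speech frames and statistically triggering facial gestures from them — can be illustrated with a minimal sketch. This is not the paper's implementation: the frame size, the autocorrelation pitch estimator, the energy threshold, and the fixed trigger probability are all assumptions made for the example, and the synthetic sine wave stands in for live microphone input.

```python
import math
import random

SAMPLE_RATE = 16000   # assumed input rate, Hz
FRAME_LEN = 400       # 25 ms analysis frames (assumption)

def frame_features(frame):
    """Per-frame prosody cues: RMS energy and a crude autocorrelation F0 estimate."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    # search for the autocorrelation peak in a plausible pitch range (80-300 Hz)
    best_lag, best_r = 0, 0.0
    for lag in range(SAMPLE_RATE // 300, SAMPLE_RATE // 80):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_r, best_lag = r, lag
    f0 = SAMPLE_RATE / best_lag if best_lag else 0.0
    return rms, f0

def gesture_trigger(rms, f0, rng, threshold=0.1, p_gesture=0.3):
    """Toy statistical trigger: on loud voiced frames, fire a gesture with fixed
    probability. A real system would use a statistical model learned from data."""
    return rms > threshold and f0 > 0 and rng.random() < p_gesture

# synthetic 1 s "voiced" 200 Hz signal as a stand-in for real-time speech input
signal = [0.5 * math.sin(2 * math.pi * 200 * t / SAMPLE_RATE)
          for t in range(SAMPLE_RATE)]

rng = random.Random(0)
events = []
for start in range(0, len(signal) - FRAME_LEN, FRAME_LEN):
    rms, f0 = frame_features(signal[start:start + FRAME_LEN])
    if gesture_trigger(rms, f0, rng):
        # record (time in seconds, estimated F0) for the animation layer
        events.append((start / SAMPLE_RATE, round(f0)))

print(len(events), events[:3])
```

In a real-time setting the frame loop would run on a live audio callback, and the trigger probabilities would come from the statistics gathered by the HUGE-style analysis rather than a constant.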