SIGGRAPH '94: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques
Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques
Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques
Visual Prosody: Facial Movements Accompanying Speech. FGR '02: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition
Combination of Facial Movements on a 3D Talking Head. CGI '04: Proceedings of Computer Graphics International
Audio-based head motion synthesis for avatar-based telepresence systems. Proceedings of the 2004 ACM SIGMM Workshop on Effective Telepresence
Mood swings: expressive speech animation. ACM Transactions on Graphics (TOG)
[HUGE]: universal architecture for statistically based HUman GEsturing. IVA '06: Proceedings of the 6th International Conference on Intelligent Virtual Agents
Speech-driven facial animation with realistic dynamics. IEEE Transactions on Multimedia
Multimodal behavior realization for embodied conversational agents. Multimedia Tools and Applications
On creating multimodal virtual humans -- real time speech driven facial gesturing. Multimedia Tools and Applications
COST '09: Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony
In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is the computer-generated human correspondent, the embodied conversational agent (ECA). To be a believable human representative, an ECA must produce facial gestures in addition to verbal and emotional displays. The information needed to generate facial gestures is extracted from speech prosody by analyzing natural speech in real time. This work builds on the previously developed HUGE architecture for statistically based facial gesturing and extends our previous work on automatic real-time lip sync.
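As a rough illustration of the idea (not the authors' implementation), driving facial gestures from speech prosody can be sketched as a short-time energy analysis in which unusually energetic frames become candidate anchor points for a gesture. The function names, the frame sizes, and the simple threshold rule below are all illustrative assumptions; a real system would use richer prosodic features (pitch, pauses) and the statistical model of the HUGE architecture.

```python
import math
import random

def frame_energy(signal, frame_len=400, hop=160):
    """Short-time energy per frame (a standard prosodic feature)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return [
        sum(s * s for s in signal[i * hop : i * hop + frame_len])
        for i in range(n_frames)
    ]

def gesture_candidates(energy, factor=2.0):
    """Hypothetical trigger rule: frames whose energy exceeds
    factor * mean energy are candidate anchors for a facial gesture
    (e.g. an eyebrow raise or head nod)."""
    threshold = factor * (sum(energy) / len(energy))
    return [i for i, e in enumerate(energy) if e > threshold]

# Synthetic 1-second "speech" signal at 16 kHz:
# low-level noise with one loud voiced burst starting at 0.5 s.
sr = 16000
rng = random.Random(0)
signal = [0.01 * rng.gauss(0.0, 1.0) for _ in range(sr)]
for n in range(8000, 9600):
    signal[n] += 0.5 * math.sin(2 * math.pi * 200 * (n - 8000) / sr)

energy = frame_energy(signal)
candidates = gesture_candidates(energy)
print(candidates)  # frame indices clustered around the energetic burst
```

With a 25 ms frame and 10 ms hop, the candidate frames cluster around the loud burst, which is the kind of speech event a prosody-driven gesture model would latch onto; in a streaming setting the same computation runs per incoming frame, which is what makes real-time operation feasible.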