We aim at the realization of an Embodied Conversational Agent able to interact naturally and emotionally with the user. In particular, the agent should behave expressively. Specifying, for a given emotion, its corresponding facial expression is not enough to produce the sensation of expressivity; one also needs to specify parameters such as intensity, tension, and movement properties. Moreover, emotion also affects lip shapes during speech: simply adding the facial expression of an emotion to the lip shape does not produce lip-readable movements. In this paper we present a model based on real data captured from a speaker fitted with passive markers. The data covers natural as well as emotional speech. We present an algorithm that determines the appropriate viseme and applies coarticulation and correlation rules to take into account the vocalic and consonantal contexts, as well as muscular phenomena such as lip compression and lip stretching. Expressive qualifiers are then used to modulate the expressivity of the lip movement. Our lip-movement model is applied to a 3D facial model compliant with the MPEG-4 standard. Copyright © 2004 John Wiley & Sons, Ltd.
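The pipeline described above (phonemes mapped to visemes, blended by coarticulation rules over the vocalic and consonantal context, then modulated by expressive qualifiers such as intensity and tension) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual model: the viseme targets, the single blending weight `alpha`, and the `intensity`/`tension` qualifiers are all placeholder assumptions standing in for the rules learned from the marker data.

```python
# Hypothetical sketch of the lip-movement pipeline: phonemes -> visemes ->
# coarticulation -> expressive modulation. All tables and weights below are
# illustrative placeholders, not measured data.

# Illustrative viseme targets: lip opening and lip width, normalized to 0..1.
VISEME_TARGETS = {
    "a": {"open": 0.9, "width": 0.5},
    "i": {"open": 0.3, "width": 0.9},
    "u": {"open": 0.3, "width": 0.1},
    "b": {"open": 0.0, "width": 0.5},  # bilabial closure (lip compression)
    "s": {"open": 0.2, "width": 0.7},
}

def coarticulate(phonemes, alpha=0.25):
    """Blend each viseme with its neighbours in the phoneme sequence.

    `alpha` is the influence of each neighbour on the current target --
    a crude stand-in for context-dependent coarticulation rules.
    """
    targets = [VISEME_TARGETS[p] for p in phonemes]
    out = []
    for i, t in enumerate(targets):
        prev_t = targets[i - 1] if i > 0 else t
        next_t = targets[i + 1] if i + 1 < len(targets) else t
        out.append({
            k: (1 - 2 * alpha) * t[k] + alpha * prev_t[k] + alpha * next_t[k]
            for k in t
        })
    return out

def apply_expressivity(frames, intensity=1.0, tension=0.0):
    """Modulate lip shapes with expressive qualifiers.

    `intensity` scales articulation amplitude; `tension` narrows the lip
    opening, crudely mimicking lip compression under emotional tension.
    """
    return [
        {
            "open": max(0.0, f["open"] * intensity * (1.0 - tension)),
            "width": min(1.0, f["width"] * intensity),
        }
        for f in frames
    ]

# Example: the syllable-like sequence /b a s/ under a subdued, tense emotion.
frames = apply_expressivity(coarticulate(["b", "a", "s"]),
                            intensity=0.8, tension=0.2)
```

In a real system the resulting per-frame lip parameters would be converted to MPEG-4 Facial Animation Parameters driving the 3D face; here they remain abstract normalized values.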