People highlight the intended interpretation of their utterances within a larger discourse through a diverse set of non-verbal signals. These signals pose a key challenge for animated conversational agents because they are pervasive, variable, and must be coordinated carefully to contribute effectively to conversation. In this paper, we describe a freely available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers. Copyright © 2004 John Wiley & Sons, Ltd.
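The rule-based layer described above, which derives facial displays from inferred linguistic structure, can be illustrated with a minimal sketch. The rule table, annotation labels, and function names below are hypothetical, chosen for illustration; they are not RUTH's actual API or rule set.

```python
# Hypothetical sketch: a rule-based layer mapping discourse/prosody
# annotations to high-level facial displays, in the spirit of RUTH's
# layered architecture. Labels and display names are illustrative only.

RULES = {
    "H*": "brow_raise",        # pitch accent on new information
    "L+H*": "brow_raise",      # contrastive accent
    "contrast": "head_nod",    # discourse-level contrast
    "uncertainty": "brow_frown",
}

def displays_for(annotations):
    """Map a sequence of annotation labels to facial displays,
    skipping labels no rule covers."""
    return [RULES[a] for a in annotations if a in RULES]

print(displays_for(["H*", "filler", "uncertainty"]))
# -> ['brow_raise', 'brow_frown']
```

In a layered design like this, the rule table can be swapped out or refined against annotated discourse data without touching the animation layer underneath.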