The media equation: how people treat computers, television, and new media like real people and places
BEAT: the Behavior Expression Animation Toolkit
Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques
GAZE-2: conveying eye contact in group video conferencing using eye-controlled camera direction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Towards integrated microplanning of language and iconic gesture for multimodal output
Proceedings of the 6th International Conference on Multimodal Interfaces
A model of attention and interest using gaze behavior
Lecture Notes in Computer Science
Creating Rapport with Virtual Agents
IVA '07: Proceedings of the 7th International Conference on Intelligent Virtual Agents
Extending MPML3D to Second Life
IVA '08: Proceedings of the 8th International Conference on Intelligent Virtual Agents
Creativity meets automation: combining nonverbal action authoring with rules and machine learning
IVA '06: Proceedings of the 6th International Conference on Intelligent Virtual Agents
This paper presents a system that automatically adds gestures to an embodied virtual character from simple text input. Gestures are generated by analyzing linguistic and contextual information in the input text. The system is embedded in the virtual world Second Life and consists of an in-world object and an off-world server component that handles the analysis. Either a user-controlled avatar or a non-user-controlled character can display the gestures, which are timed to speech output from a Text-to-Speech system, so the character exhibits nonverbal behavior without requiring the user to select it manually.
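The abstract describes a pipeline that maps analyzed text to gestures timed against synthesized speech. A minimal sketch of that idea, under assumed details (the keyword rules, gesture names, and the fixed speech rate below are illustrative placeholders, not the paper's actual linguistic analysis):

```python
import re

# Hypothetical keyword-to-gesture rules; the described system uses
# linguistic and contextual analysis, not a plain keyword lookup.
GESTURE_RULES = {
    "hello": "WAVE",
    "you": "POINT_AT_LISTENER",
    "this": "DEICTIC_POINT",
}

WORDS_PER_SECOND = 2.5  # assumed speech rate for aligning with TTS output


def annotate_gestures(text):
    """Return (word, start_time_seconds, gesture) triples for an utterance.

    Start times approximate when the TTS system would reach each word,
    so the in-world character can trigger the gesture at that moment.
    """
    words = re.findall(r"[a-z']+", text.lower())
    schedule = []
    for index, word in enumerate(words):
        gesture = GESTURE_RULES.get(word)
        if gesture is not None:
            schedule.append((word, index / WORDS_PER_SECOND, gesture))
    return schedule


print(annotate_gestures("Hello, did you see this?"))
# → [('hello', 0.0, 'WAVE'), ('you', 0.8, 'POINT_AT_LISTENER'),
#    ('this', 1.6, 'DEICTIC_POINT')]
```

In the architecture the abstract outlines, this analysis would run on the off-world server, which then sends the timed gesture schedule to the in-world object for playback alongside the speech.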