In this paper we introduce a system that automatically adds non-verbal behavior to a given dialogue script between two virtual embodied agents. It transforms a dialogue in plain text into an agent behavior script enriched with eye-gaze and conversational-gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze; gestures are generated from an analysis of the linguistic and contextual information in the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multimodal behavior of animated life-like agents, including facial and body animation and synthetic speech. Our system makes it easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be cumbersome and time-consuming.
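The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the element names (`turn`, `gaze`, `gesture`, `speak`) only approximate MPML3D, the keyword-to-gesture lexicon is a hypothetical stand-in for the linguistic and contextual analysis, and the fixed turn-initial gaze shift is a crude placeholder for the face-to-face gaze model.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical keyword-to-gesture mapping; the actual system derives
# gestures from linguistic and contextual analysis of the input text.
GESTURE_LEXICON = {
    "this": "deictic", "that": "deictic",
    "big": "iconic", "small": "iconic",
    "but": "beat", "however": "beat",
}

def annotate_dialogue(script: str) -> ET.Element:
    """Turn a 'Speaker: utterance' dialogue into an MPML3D-like
    behavior script (element names here are illustrative only)."""
    root = ET.Element("mpml3d")
    for line in script.strip().splitlines():
        speaker, text = (part.strip() for part in line.split(":", 1))
        turn = ET.SubElement(root, "turn", speaker=speaker)
        # Gaze: look toward the listener at the start of each turn --
        # a simplified stand-in for the gaze model in the paper.
        ET.SubElement(turn, "gaze", target="listener", phase="start")
        # Gesture: tag words found in the (hypothetical) lexicon.
        for word in re.findall(r"\w+", text.lower()):
            if word in GESTURE_LEXICON:
                ET.SubElement(turn, "gesture",
                              type=GESTURE_LEXICON[word], word=word)
        ET.SubElement(turn, "speak").text = text
    return root

dialogue = "Ken: Look at this big picture.\nYuki: But I prefer that small one."
print(ET.tostring(annotate_dialogue(dialogue), encoding="unicode"))
```

The point of the sketch is the overall flow: plain dialogue text goes in, and a structured behavior script with gaze, gesture, and speech elements comes out, ready to drive agent animation and synthetic speech.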