Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System

  • Authors:
  • Werner Breitfuss; Helmut Prendinger; Mitsuru Ishizuka

  • Affiliations:
  • Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan; National Institute of Informatics, Tokyo, Japan; Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan

  • Venue:
  • IVA '08: Proceedings of the 8th International Conference on Intelligent Virtual Agents
  • Year:
  • 2008

Abstract

In this paper we present the rules and algorithms we use to automatically generate non-verbal behavior, such as gestures and gaze, for two embodied virtual agents. They allow us to transform a dialogue in text format into an agent behavior script enriched with eye gaze and conversational gestures. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior, while gestures are generated by analyzing linguistic and contextual information in the input text. Since all behaviors are generated automatically, our system offers content creators a convenient method for composing multimodal presentations, a task that would otherwise be very cumbersome and time-consuming.
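The pipeline the abstract describes can be illustrated with a minimal sketch: a dialogue turn in plain text is annotated with gesture tags (triggered here by simple lexical cues) and a gaze sequence (attend to the addressee, avert mid-utterance, return). The keyword list, tag names, and gaze pattern below are illustrative assumptions, not the authors' actual rules.

```python
import re

# Illustrative lexical cues for deictic gestures; the paper's real rules
# use richer linguistic and contextual analysis (these words are an assumption).
DEICTIC_WORDS = {"this", "that", "here", "there"}

def annotate_turn(speaker, text, addressee):
    """Turn one line of text dialogue into a behavior-script entry
    with gesture and gaze annotations (hypothetical format)."""
    tokens = [re.sub(r"\W", "", t) for t in text.lower().split()]
    # Attach a deictic gesture to words that point at something.
    gestures = [(t, "deictic") for t in tokens if t in DEICTIC_WORDS]
    # Rough stand-in for face-to-face gaze theory: look at the addressee
    # at turn start, avert gaze mid-utterance, then re-establish contact.
    gaze = [("gaze_at", addressee), ("gaze_away", None), ("gaze_at", addressee)]
    return {"speaker": speaker, "text": text,
            "gestures": gestures, "gaze": gaze}

entry = annotate_turn("AgentA", "Look at this chart over there.", "AgentB")
print(entry["gestures"])  # deictic gestures on "this" and "there"
```

A full system would emit such entries as a script for the animation engine, interleaving both agents' speaker and listener behaviors.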