Realizing multimodal behavior: closing the gap between behavior planning and embodied agent presentation

  • Authors:
  • Michael Kipp; Alexis Heloir; Marc Schröder; Patrick Gebhard

  • Affiliations:
  • DFKI, Saarbrücken, Germany (all authors)

  • Venue:
  • IVA '10: Proceedings of the 10th International Conference on Intelligent Virtual Agents
  • Year:
  • 2010


Abstract

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, etc.) is challenging. It requires a high degree of animation control, particularly when reactive behaviors are needed. We propose distinguishing realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and control presentation directly. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
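To make the timing-resolution idea concrete, the following is a minimal, hypothetical sketch of the kind of cross-modal synchronization the abstract describes: a gesture's stroke phase is anchored to the onset of a word in the synthesized speech, and absolute phase times are derived from relative durations in a lexicon entry. All names, durations, and onsets here are illustrative; the paper itself resolves such constraints with a formal constraint solver rather than this direct computation.

```python
# Hypothetical sketch: align a gesture's stroke with a word onset.
# Word onsets (seconds) as a speech synthesizer might report them.
word_onsets = {"introduce": 0.42, "you": 0.95}

# Relative phase durations (seconds) from an illustrative lexicon entry.
gesture_phases = {"preparation": 0.3, "stroke": 0.2, "retraction": 0.4}

def resolve_gesture_timing(sync_word, phases, onsets):
    """Place the stroke start at the word onset, then derive absolute
    (start, end) times for every phase from the relative durations."""
    stroke_start = onsets[sync_word]
    t = stroke_start - phases["preparation"]  # gesture begins earlier
    timeline = {}
    for phase, dur in phases.items():
        timeline[phase] = (round(t, 3), round(t + dur, 3))
        t += dur
    return timeline

timeline = resolve_gesture_timing("introduce", gesture_phases, word_onsets)
assert timeline["stroke"][0] == 0.42  # stroke begins exactly at the word onset
```

A real realizer must handle many such constraints at once (multiple sync points, stretchable phases, conflicting demands), which is why the paper delegates the problem to a constraint solver instead of computing times phase by phase.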