Algorithms for controlling cooperation between output modalities in 2D embodied conversational agents

  • Authors:
  • Sarkis Abrilian, Jean-Claude Martin, Stéphanie Buisine

  • Affiliations:
  • Sarkis Abrilian: LIMSI-CNRS, Orsay Cedex, France
  • Jean-Claude Martin: LIMSI-CNRS, Orsay Cedex, France & LINC-Univ Paris 8, Montreuil, France
  • Stéphanie Buisine: LIMSI-CNRS, Orsay Cedex, France

  • Venue:
  • Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI '03)
  • Year:
  • 2003


Abstract

Recent advances in the specification of the multimodal behavior of Embodied Conversational Agents (ECAs) have proposed a direct, deterministic, one-step mapping from high-level specifications of dialog state or agent emotion onto low-level specifications of the multimodal behavior to be displayed by the agent (e.g. facial expression, gestures, vocal utterance). The difference in abstraction between these two levels of specification makes such a complex mapping difficult to define. In this paper we propose an intermediate level of specification based on combinations between modalities (e.g. redundancy, complementarity). We explain how such intermediate-level specifications can be described in XML in the case of deictic expressions. We define algorithms for parsing these descriptions and generating the corresponding multimodal behavior of 2D cartoon-like conversational agents. Random selection has been introduced into these algorithms to induce "natural variations" in the agent's behavior. We conclude by discussing the usefulness of this approach for the design of ECAs.
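The abstract does not reproduce the paper's XML schema or its parsing algorithms. As a rough illustration of the idea, the following is a minimal Python sketch, assuming a hypothetical schema in which a deictic expression lists its admissible modality combinations (redundancy, complementarity) and the generator randomly selects one of them to produce variation. All element and attribute names, and the generate_behavior function, are invented for illustration and are not taken from the paper.

    import random
    import xml.etree.ElementTree as ET

    # Hypothetical intermediate-level specification for a deictic expression.
    # The element and attribute names are illustrative; the paper's actual
    # XML schema is not reproduced here.
    SPEC = """
    <deictic target="red_square">
      <cooperation type="redundancy">
        <modality name="speech" content="this red square"/>
        <modality name="gesture" content="point_at(red_square)"/>
      </cooperation>
      <cooperation type="complementarity">
        <modality name="speech" content="this one"/>
        <modality name="gesture" content="point_at(red_square)"/>
      </cooperation>
    </deictic>
    """

    def generate_behavior(spec_xml):
        """Parse the spec and pick one admissible cooperation pattern.

        The random choice among patterns stands in for the "natural
        variations" that the paper's algorithms introduce.
        """
        root = ET.fromstring(spec_xml)
        patterns = root.findall("cooperation")
        chosen = random.choice(patterns)  # random selection -> behavioral variation
        return [(m.get("name"), m.get("content"))
                for m in chosen.findall("modality")]

    if __name__ == "__main__":
        for modality, content in generate_behavior(SPEC):
            print(modality + ": " + content)

In the system described by the paper, the selected combination would then be mapped to low-level behaviors of the 2D cartoon-like agent (facial expression, gesture, vocal utterance); here the print statements merely stand in for that final step.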