Contextual factors and adaptative multimodal human-computer interaction: multi-level specification of emotion and expressivity in embodied conversational agents

  • Authors and affiliations:
  • Myriam Lamolle (LINC, IUT de Montreuil, University Paris 8, Montreuil, France)
  • Maurizio Mancini (LINC, IUT de Montreuil, University Paris 8, Montreuil, France)
  • Catherine Pelachaud (LINC, IUT de Montreuil, University Paris 8, Montreuil, France)
  • Sarkis Abrilian (LIMSI-CNRS, Orsay, France)
  • Jean-Claude Martin (LIMSI-CNRS, Orsay, France)
  • Laurence Devillers (LIMSI-CNRS, Orsay, France)

  • Venue:
  • CONTEXT'05: Proceedings of the 5th International Conference on Modeling and Using Context
  • Year:
  • 2005

Abstract

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors depends not only on factors related to her individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social setting of the conversation) and on dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g., which contextual cues and levels of representation are required for the proper recognition of emotions).
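To make the multi-level specification described above concrete, the following is a minimal, hypothetical Python sketch of how individuality, contextual, and dynamic variables might be combined into behavior expressivity parameters. The expressivity dimensions (spatial extent, temporal extent, fluidity, power) follow those commonly used in this line of ECA research; the class names, the weighting scheme, and all numeric values are illustrative assumptions, not the paper's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Individuality:
        # Stable traits of the agent (assumed encoding, not the paper's format).
        culture: str
        role: str
        extraversion: float  # 0.0 (introverted) .. 1.0 (extraverted)

    @dataclass
    class Context:
        # Situational variables of the conversation.
        interlocutor: str
        setting: str  # e.g. "formal_interview" or "casual_chat"

    @dataclass
    class DynamicState:
        # Variables that evolve turn by turn.
        emotion: str              # e.g. "joy", "anger", "sadness"
        emotion_intensity: float  # 0.0 .. 1.0

    @dataclass
    class Expressivity:
        # Expressivity dimensions; all values assumed to range over [-1.0, 1.0].
        spatial_extent: float
        temporal_extent: float
        fluidity: float
        power: float

    def specify_expressivity(who: Individuality, ctx: Context,
                             state: DynamicState) -> Expressivity:
        """Hypothetical mapping from multi-level inputs to expressivity.

        A formal setting damps gesture amplitude; extraversion and emotion
        intensity amplify it. The weights are illustrative only.
        """
        formality = 0.8 if ctx.setting == "formal_interview" else 0.2
        arousal = state.emotion_intensity * (1.2 if state.emotion == "anger" else 1.0)
        amplitude = min(1.0, who.extraversion * 0.5 + arousal * 0.5)
        damping = 1.0 - 0.5 * formality
        return Expressivity(
            spatial_extent=amplitude * damping,
            temporal_extent=arousal * damping,
            fluidity=0.5 - 0.4 * arousal,  # higher arousal -> jerkier motion
            power=arousal,
        )

    if __name__ == "__main__":
        agent = Individuality(culture="fr", role="news_anchor", extraversion=0.7)
        ctx = Context(interlocutor="journalist", setting="formal_interview")
        state = DynamicState(emotion="joy", emotion_intensity=0.6)
        print(specify_expressivity(agent, ctx, state))

The point of the sketch is the layering: stable individuality factors, slower-changing contextual variables, and fast-changing dynamic state each contribute to the final expressivity parameters, rather than any single level determining behavior on its own.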