Embodied contextual agent in information delivering application

  • Authors:
  • Catherine Pelachaud (University of Rome "La Sapienza"); Valeria Carofiglio (University of Bari); Berardina De Carolis (University of Bari); Fiorella de Rosis (University of Bari); Isabella Poggi (University of Rome Three)

  • Venue:
  • Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2

  • Year:
  • 2002

Abstract

We aim at building a new human-computer interface for Information Delivering applications: the conversational agent that we have developed is a multimodal believable agent able to converse with the User by exhibiting synchronized and coherent verbal and nonverbal behavior. The agent is endowed with a personality and a social role, which allow her to show an emotion or to refrain from showing it, depending on the context in which the conversation takes place. The agent has both a face and a mind. The mind is designed according to a BDI structure that depends on the agent's personality; it evolves dynamically during the conversation, according to the User's dialog moves and to the emotions triggered by the Interlocutor's moves; these cognitive features are then translated into facial behaviors. In this paper, we describe the overall architecture of our system and its various components; in particular, we present our dynamic model of emotions. We illustrate our results with an example of dialog that runs throughout the paper. We pay particular attention to the generation of verbal and nonverbal behaviors and to the way they are synchronized and combined with each other. We also discuss how these acts are translated into facial expressions.
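The pipeline the abstract describes — a personality-dependent BDI mind whose emotional state is updated by the User's dialog moves and then filtered into a facial display — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names (`Mind`, `react`, `facial_expression`, the `expressivity` trait, and the threshold value) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    # Hypothetical personality trait: how readily the agent displays a felt
    # emotion (0..1). Stands in for the paper's personality/social-role filter.
    expressivity: float = 0.5
    # Current emotion intensities, updated as the dialog unfolds.
    emotions: dict = field(default_factory=lambda: {"joy": 0.0, "sorrow": 0.0})

    def react(self, move: str) -> None:
        """Update emotion intensities triggered by the Interlocutor's move."""
        if move == "good_news":
            self.emotions["joy"] = min(1.0, self.emotions["joy"] + 0.4)
        elif move == "bad_news":
            self.emotions["sorrow"] = min(1.0, self.emotions["sorrow"] + 0.4)

    def facial_expression(self) -> str:
        """Translate the cognitive state into a facial behavior; the
        social-role constraint suppresses weakly displayed emotions."""
        felt, intensity = max(self.emotions.items(), key=lambda kv: kv[1])
        shown = intensity * self.expressivity  # display rule (assumed form)
        return felt if shown > 0.2 else "neutral"

mind = Mind(expressivity=0.8)
mind.react("good_news")
print(mind.facial_expression())  # joy: 0.4 * 0.8 = 0.32 > 0.2, so "joy"
```

An agent with low `expressivity` (e.g. a formal social role) would return `"neutral"` for the same dialog move, illustrating how the same felt emotion can be shown or withheld depending on context.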