Multimodal communication between synthetic agents

  • Authors:
  • Catherine Pelachaud; Isabella Poggi

  • Affiliations:
  • Università di Roma "La Sapienza", Via Buonarroti, 12, Rome, Italy; Università di Roma Tre, Via del Castro Pretorio, 20, Rome, Italy

  • Venue:
  • AVI '98: Proceedings of the working conference on Advanced visual interfaces
  • Year:
  • 1998


Abstract

Dialog with a synthetic agent is a broad research topic aimed at making user interfaces friendlier. In this paper we present an ongoing project on simulating a dialog between two synthetic agents. In particular, we focus on finding the appropriate facial expressions of a speaker addressing different types of listeners (tourist, employee, child, and so on) with various linguistic forms such as request, question, and information. Communication between speaker and listener involves multimodal behaviors: the choice of words, intonation, and paralinguistic parameters on the vocal side; facial expressions, gaze, gestures, and body movements on the nonverbal side. The choice of each individual behavior, together with their mutual interaction and synchronization, produces the richness and subtlety of human communication.

To develop a system that automatically computes the facial and gaze behaviors appropriate to a communicative act for a given speaker and listener, our first step is to categorize facial expressions and gaze on the basis of their communicative functions rather than their appearance. The next step is to find inference rules that describe the "mental" process going on in the speaker while communicating with the listener. The rules take into account the power relation between speaker and listener, and the beliefs the speaker holds about the listener, to constrain the choice of performative acts.
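To make the rule-based idea concrete, below is a minimal sketch in Python of how such inference rules might be organized. This is not the authors' system: the names `Listener`, `PERFORMATIVE_RULES`, `FUNCTION_SIGNALS`, and `choose_behavior`, along with the specific rule entries, are illustrative assumptions. The sketch shows the two steps the abstract describes: a power relation and a communicative goal constrain the choice of performative act, and the act's communicative function (not its surface appearance) is then mapped to facial and gaze signals.

```python
from dataclasses import dataclass


@dataclass
class Listener:
    kind: str   # e.g. "tourist", "employee", "child"
    power: str  # speaker's power relative to listener: "higher", "equal", "lower"


# Assumed rule table: (speaker's power relation, communicative goal) -> performative act.
PERFORMATIVE_RULES = {
    ("higher", "get_action"): "order",
    ("equal", "get_action"): "request",
    ("lower", "get_action"): "plea",
    ("equal", "get_info"): "question",
    ("higher", "give_info"): "inform",
}

# Assumed mapping from a performative act's communicative function to
# nonverbal signals, following the idea of categorizing expressions by
# function rather than by appearance.
FUNCTION_SIGNALS = {
    "order": {"brows": "frown", "gaze": "direct", "head": "nod"},
    "request": {"brows": "raise", "gaze": "direct", "head": "tilt"},
    "plea": {"brows": "inner_raise", "gaze": "up", "head": "lower"},
    "question": {"brows": "raise", "gaze": "direct", "head": "tilt"},
    "inform": {"brows": "neutral", "gaze": "intermittent", "head": "neutral"},
}


def choose_behavior(listener: Listener, goal: str) -> dict:
    """Pick a performative act allowed by the power relation, then the
    facial and gaze signals that express its communicative function."""
    act = PERFORMATIVE_RULES.get((listener.power, goal))
    if act is None:
        act = "request"  # neutral fallback when no rule fires
    return {"act": act, **FUNCTION_SIGNALS[act]}


# Example: a speaker with authority getting an employee to act.
print(choose_behavior(Listener(kind="employee", power="higher"), "get_action"))
# -> {'act': 'order', 'brows': 'frown', 'gaze': 'direct', 'head': 'nod'}
```

In a full system of the kind the abstract outlines, the rule table would also consult the speaker's beliefs about the listener (e.g., a child may warrant softer acts than the power relation alone would license), and the chosen signals would be synchronized with the verbal channel rather than emitted as a static set.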