Searching for Prototypical Facial Feedback Signals

  • Authors:
  • Dirk Heylen; Elisabetta Bevacqua; Marion Tellier; Catherine Pelachaud

  • Affiliations:
  • Human Media Interaction Group - Department of Computer Science, University of Twente, The Netherlands; LINC - IUT de Montreuil, University of Paris 8, France; LINC - IUT de Montreuil, University of Paris 8, France; LINC - IUT de Montreuil, University of Paris 8, France

  • Venue:
  • IVA '07: Proceedings of the 7th International Conference on Intelligent Virtual Agents
  • Year:
  • 2007

Abstract

Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding, and attitude. As expressions need to serve many functions at the same time and most of the component signals are ambiguous, it is important to get a better idea of the many-to-many mappings between displays and functions. We asked people to label several dynamic expressions as a probe into this semantic space. We compare simple signals and combined signals in order to find out whether a combination of signals can have a meaning of its own, i.e., whether the meaning attached to the combination differs from the meanings of the single signals. Results show that in some cases a combination of signals alters the perceived meaning of the backchannel.