Generating robot/agent backchannels during a storytelling experiment

  • Authors:
  • S. Al Moubayed; M. Baklouti; M. Chetouani; T. Dutoit; A. Mahdhaoui; J.-C. Martin; S. Ondas; C. Pelachaud; J. Urbain; M. Yilmaz

  • Affiliations:
  • Center for Speech Technology, Royal Institute of Technology, KTH, Sweden; Thalès, France; University Pierre and Marie Curie, France; Faculté Polytechnique de Mons, Belgium; University Pierre and Marie Curie, France; LIMSI, France; Technical University of Kosice, Slovakia; INRIA, France; Faculté Polytechnique de Mons, Belgium; Koç University, Turkey

  • Venue:
  • ICRA '09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009


Abstract

This work presents a real-time framework for research on multimodal feedback of robots and talking agents in the context of human-robot interaction (HRI) and human-computer interaction (HCI). To evaluate the framework, a multimodal corpus (eNTERFACE STEAD) was built, and a study of the most relevant multimodal features was conducted in order to build an active robot/agent listener for a storytelling experiment with humans. The experiments show that even when the same reactive behavior models are built for the robot and the talking agent, the communicated behavior is interpreted and realized differently because of the different communicative channels each embodiment offers: physical but less human-like for the robot, and virtual but more expressive and human-like for the talking agent.
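
As a concrete illustration of the kind of reactive listener model the abstract describes, the sketch below shows one plausible wiring: a prosodic cue (a short pause following falling pitch) triggers an abstract backchannel decision, which is then realized differently on the robot's and the virtual agent's output channels. This is a minimal, hypothetical sketch, not the paper's actual model; the cue rule, thresholds, and all names (ProsodyFrame, should_backchannel, realize) are assumptions for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class ProsodyFrame:
    """Hypothetical per-frame prosodic features of the human storyteller."""
    pause_ms: float     # silence duration since the last voiced frame
    pitch_slope: float  # pitch slope (Hz/s) over the last voiced region

def should_backchannel(frame: ProsodyFrame,
                       min_pause_ms: float = 200.0,
                       falling_slope: float = -10.0,
                       p_emit: float = 0.7) -> bool:
    """Fire a backchannel when a short pause follows falling pitch.

    Pause-after-pitch-fall is a classic listener-response cue; the
    stochastic gate keeps the listener from reacting at every opportunity.
    Thresholds here are illustrative, not values from the paper.
    """
    cue = frame.pause_ms >= min_pause_ms and frame.pitch_slope <= falling_slope
    return cue and random.random() < p_emit

def realize(channel: str) -> str:
    """Map one abstract decision onto an embodiment-specific behavior:
    a physical nod for the robot, a richer facial/vocal display for the
    virtual agent. The same decision, realized over different channels."""
    return "head_nod" if channel == "robot" else "smile + mm-hmm"

if __name__ == "__main__":
    frame = ProsodyFrame(pause_ms=350.0, pitch_slope=-25.0)
    if should_backchannel(frame):
        print("robot:", realize("robot"))
        print("agent:", realize("agent"))
```

Keeping the decision model separate from the realization step mirrors the abstract's observation: the same reactive model can drive both embodiments, while the perceived behavior differs with the communicative channel.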