Creating standardized video recordings of multimodal interactions across cultures

  • Authors:
  • Matthias Rehm, Elisabeth André, Nikolaus Bee, Birgit Endrass, Michael Wissner, Yukiko Nakano, Afia Akhter Lipi, Toyoaki Nishida, Hung-Hsuan Huang

  • Affiliations:
  • Augsburg University, Institute of Computer Science, Augsburg, Germany (Rehm, André, Bee, Endrass, Wissner); Dept. of Computer and Information Science, Faculty of Science and Technology, Seikei University, Japan (Nakano, Lipi); Dept. of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Japan (Nishida, Huang)

  • Venue:
  • Multimodal Corpora
  • Year:
  • 2009

Abstract

Adapting the behavior of an interactive system to the cultural background of the user requires information on how relevant behaviors differ across cultures. To gain such insights into the interrelation of culture and behavior patterns, the information available in the literature is often too anecdotal to serve as the basis for modeling a system's behavior, making it necessary to collect multimodal corpora in a standardized fashion in different cultures. This chapter introduces the challenges of such an endeavor and presents solutions, illustrated with examples from a German-Japanese project that aims at modeling culture-specific behaviors for Embodied Conversational Agents.