Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot

  • Authors:
  • R. Stiefelhagen, H. K. Ekenel, C. Fügen, P. Gieselmann, H. Holzapfel, F. Kraft, K. Nickel, M. Voit, A. Waibel

  • Affiliations:
  • Univ. Karlsruhe, Karlsruhe

  • Venue:
  • IEEE Transactions on Robotics
  • Year:
  • 2007

Abstract

In this paper, we present our work on building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, and recognition of a person's head orientation. Each component is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.
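The abstract mentions determining the addressee from a user's head orientation during human-human-robot interaction. As a hedged illustration only (the paper's actual classifier is not specified in this abstract), the sketch below shows one simple way such a decision could be made: compare an estimated head pan angle against the direction from the user's head to the robot and apply an angular tolerance. The function name, coordinate conventions, and `threshold_deg` parameter are all assumptions introduced for this example.

```python
import math

# Hypothetical sketch (not the paper's method): decide whether a speaker
# addresses the robot by comparing the estimated head pan angle with the
# direction from the speaker's head to the robot in a shared world frame.

def addressee_is_robot(head_pos, robot_pos, head_pan_deg, threshold_deg=15.0):
    """Return True if the speaker's head orientation points at the robot.

    head_pos, robot_pos: (x, y) positions in a common world frame (meters).
    head_pan_deg: estimated head pan angle in degrees (0 = world x-axis).
    threshold_deg: angular tolerance; an assumed tuning parameter.
    """
    dx = robot_pos[0] - head_pos[0]
    dy = robot_pos[1] - head_pos[1]
    angle_to_robot = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two angles, wrapped to [-180, 180].
    diff = (head_pan_deg - angle_to_robot + 180.0) % 360.0 - 180.0
    return abs(diff) <= threshold_deg

# Example: a user two meters away, head turned roughly toward the robot.
print(addressee_is_robot((0.0, 0.0), (2.0, 0.0), head_pan_deg=8.0))  # True
```

In practice, a system like the one described would feed such a geometric cue from its head-pose estimator into the dialogue manager, typically smoothed over time rather than decided per frame; the fixed threshold here is purely illustrative.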