Human-style interaction with a robot for cooperative learning of scene objects

  • Authors:
  • Shuyin Li, Axel Haasch, Britta Wrede, Jannik Fritsch, Gerhard Sagerer

  • Affiliations:
  • Bielefeld University, Bielefeld, Germany (all authors)

  • Venue:
  • ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
  • Year:
  • 2005

Abstract

In research on human-robot interaction the interest is currently shifting from uni-modal dialog systems to multi-modal interaction schemes. We present a system for human-style interaction with a robot that is integrated into our mobile robot BIRON. To model the dialog we adopt an extended grounding concept with a mechanism to handle multi-modal input and output, where object references are resolved by interaction with an object attention system (OAS). The OAS integrates multiple inputs from, e.g., the object and gesture recognition systems and provides the information for a common representation. This representation can be accessed by both modules and combines symbolic verbal attributes with sensor-based features. We argue that such a representation is necessary to achieve robust and efficient information processing.
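To illustrate the kind of common representation the abstract describes, the sketch below shows one way a scene object could combine symbolic verbal attributes (written by the dialog module) with sensor-based features (written by the object attention system). All class, field, and method names are illustrative assumptions for this sketch and are not taken from the paper.

```python
# Hypothetical sketch of a shared scene-object representation: symbolic verbal
# attributes from the dialog system combined with sensor-based features from
# the object attention system (OAS). Names and fields are assumptions.
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    object_id: int
    # Symbolic verbal attributes grounded in dialog, e.g. {"color": "red", "type": "cup"}
    verbal_attributes: dict[str, str] = field(default_factory=dict)
    # Sensor-based features, e.g. {"position_3d": [0.4, 0.1, 0.9]}
    sensor_features: dict[str, list[float]] = field(default_factory=dict)

    def update_from_dialog(self, attribute: str, value: str) -> None:
        """Called by the dialog module when the user verbally describes the object."""
        self.verbal_attributes[attribute] = value

    def update_from_sensors(self, feature: str, values: list[float]) -> None:
        """Called by the OAS when new sensor data about the object arrives."""
        self.sensor_features[feature] = values


# Example: both modules write into the same representation of one object.
cup = SceneObject(object_id=1)
cup.update_from_dialog("color", "red")                    # resolved from the user's utterance
cup.update_from_sensors("position_3d", [0.4, 0.1, 0.9])   # from gesture/object recognition
```

A single object record of this kind would let a verbal reference ("the red cup") be matched against sensor features, and vice versa, without duplicating state in the two modules.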