Crossmodal content binding in information-processing architectures

  • Authors:
  • Henrik Jacobsson (DFKI GmbH, Saarbruecken, Germany); Nick Hawes (University of Birmingham, Birmingham, United Kingdom); Geert-Jan Kruijff (DFKI GmbH, Saarbruecken, Germany); Jeremy Wyatt (University of Birmingham, Birmingham, United Kingdom)

  • Venue:
  • Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI '08)
  • Year:
  • 2008

Abstract

Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any representation a single sensor could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to these problems have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and related approaches, can be used to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
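
The abstract describes a framework in which content from different subsystems is bound into a shared representation of the robot's situation. The sketch below is an illustration only, not the authors' implementation: it assumes that each modality (e.g. vision, dialogue) contributes a "proxy" of feature-value pairs, and that a binder merges proxies whose features do not conflict into shared "unions". All class and function names (Proxy, Union, Binder, compatible) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of cross-modal content binding: each modality posts a
# "proxy" describing an entity; the binder groups compatible proxies into
# shared "unions" that any subsystem can consult.

@dataclass
class Proxy:
    source: str                # originating subsystem, e.g. "vision" or "dialogue"
    features: Dict[str, str]   # feature-value pairs, e.g. {"colour": "red", "type": "mug"}

@dataclass
class Union:
    proxies: List[Proxy] = field(default_factory=list)

    def features(self) -> Dict[str, str]:
        # Merged view of all features contributed by the bound proxies.
        merged: Dict[str, str] = {}
        for p in self.proxies:
            merged.update(p.features)
        return merged

def compatible(proxy: Proxy, union: Union) -> bool:
    """A proxy binds to a union only if no shared feature has conflicting values."""
    merged = union.features()
    return all(merged.get(k, v) == v for k, v in proxy.features.items())

class Binder:
    def __init__(self) -> None:
        self.unions: List[Union] = []

    def add_proxy(self, proxy: Proxy) -> Union:
        # Bind to the first compatible union, or start a new one.
        for union in self.unions:
            if compatible(proxy, union):
                union.proxies.append(proxy)
                return union
        union = Union([proxy])
        self.unions.append(union)
        return union

if __name__ == "__main__":
    binder = Binder()
    # Vision reports a red cylindrical object; dialogue refers to "the red mug".
    binder.add_proxy(Proxy("vision", {"colour": "red", "shape": "cylindrical"}))
    u = binder.add_proxy(Proxy("dialogue", {"colour": "red", "type": "mug"}))
    print(u.features())  # {'colour': 'red', 'shape': 'cylindrical', 'type': 'mug'}
```

In the implemented system, matching would of course rest on richer feature comparison, information fusion, and ontological reasoning, as the abstract notes; the sketch only captures the basic pattern of binding compatible content from multiple modalities into one shared structure.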