Attention driven visual processing for an interactive dialog robot

  • Authors:
  • Thomas Müller; Alois Knoll

  • Affiliations:
  • Technische Universität München, Garching/München/Germany; Technische Universität München, Garching/München/Germany

  • Venue:
  • Proceedings of the 2009 ACM symposium on Applied Computing
  • Year:
  • 2009

Abstract

In this paper we propose an attention-based vision system for the JAST interactive dialog robot. The robotic vision system incorporates three submodules: object recognition, gesture recognition, and self-recognition. The performance boost of our biologically inspired vision system rests on two assumptions: first, attention is generally attracted by regions of high intensity or hue gradients as well as by scene dynamics (bottom-up attention attraction); and second, attentional focus can be directed by higher-level modules, whether volitional or not, in an inhibitory or reinforcing way (top-down attention control). The system proposed in this paper exploits these assumptions and organizes its computational effort accordingly. Integrated into an efficient data-management architecture, the vision system continuously publishes results to the cognitive layer of the robot and thus enables operation in real time. Furthermore, the modular system structure and the asynchronous communication paradigm allow for efficient integration of additional modules, whether visual or any other sensory input. The main contribution of this work is the application of neuroscience findings and biologically plausible theories of attention-based visual processing to a real-world robotic setup. Our experimental results show tremendous speed-ups when either the bottom-up attention attractors or the principle of top-down attention control is used as an input-data filter for further visual analysis, with the combination of the two performing best.
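The two assumptions above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): bottom-up saliency is approximated by intensity and hue gradient magnitudes plus a frame-difference term for scene dynamics, top-down control is modeled as a multiplicative gain map (values below 1 inhibit a region, values above 1 reinforce it), and only sufficiently salient pixels are passed on for further visual analysis. All function names and the thresholding scheme are illustrative assumptions.

```python
import numpy as np

def bottom_up_saliency(intensity, hue, prev_intensity):
    """Toy bottom-up saliency map: gradient magnitude of the intensity
    and hue channels plus frame differencing for scene dynamics.
    Each cue is min-max normalized before summation. (Illustrative
    only; hue wraparound and multi-scale filtering are ignored.)"""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))  # gradients along rows/cols
        return np.hypot(gx, gy)

    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    motion = np.abs(intensity.astype(float) - prev_intensity.astype(float))
    return norm(grad_mag(intensity)) + norm(grad_mag(hue)) + norm(motion)

def apply_top_down(saliency, gain_map):
    """Top-down attention control as multiplicative gain:
    gain < 1 inhibits a region, gain > 1 reinforces it."""
    return saliency * gain_map

def attended_regions(saliency, threshold=0.5):
    """Binary mask of pixels forwarded to the recognition submodules;
    everything below a fraction of the peak saliency is skipped."""
    return saliency >= threshold * saliency.max()
```

In this sketch the speed-up comes from the final mask: downstream modules (object, gesture, and self-recognition) only touch pixels where the mask is true, and top-down gain further shrinks that set by zeroing out regions a higher-level module has inhibited.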