Integrating context-free and context-dependent attentional mechanisms for gestural object reference

  • Authors:
  • Gunther Heidemann, Robert Rae, Holger Bekel, Ingo Bax, Helge Ritter

  • Affiliations:
  • Neuroinformatics Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany (Heidemann, Bekel, Bax, Ritter); PerFact Innovation, Bielefeld, Germany (Rae)

  • Venue:
  • ICVS'03: Proceedings of the 3rd International Conference on Computer Vision Systems
  • Year:
  • 2003

Abstract

We present a vision system for human-machine interaction that relies on a small wearable camera mounted on ordinary eyeglasses. The camera views the area in front of the user, in particular the hands. To evaluate hand movements for pointing gestures and to recognise which object is being referenced, we introduce an approach that integrates bottom-up generated feature maps with top-down propagated recognition results. In this system, modules for context-free focus of attention run in parallel with a recognition system for hand gestures. In contrast to other approaches, the two branches are fused not on the symbolic but on the sub-symbolic level, by means of attention maps. This method is plausible from a cognitive point of view and facilitates the integration of entirely different modalities.
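
The paper itself gives no code, but the fusion step described in the abstract can be illustrated with a minimal sketch. Assuming the bottom-up branch delivers several context-free feature maps and the top-down branch delivers a single map propagated back from gesture/object recognition, a pixel-wise weighted combination yields one attention map whose maximum marks the next focus of attention. The function and parameter names below (fuse_attention, alpha) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def normalise(m):
        # Scale a map to [0, 1]; a flat map stays zero.
        m = m - m.min()
        peak = m.max()
        return m / peak if peak > 0 else m

    def fuse_attention(bottom_up_maps, top_down_map, alpha=0.5):
        # bottom_up_maps: list of 2-D arrays from context-free saliency modules
        #                 (e.g. colour contrast, intensity, local symmetry).
        # top_down_map:   2-D array propagated back from gesture/object recognition.
        # alpha:          assumed weighting between bottom-up saliency and
        #                 top-down bias (not a parameter named in the paper).
        saliency = sum(normalise(m) for m in bottom_up_maps) / len(bottom_up_maps)
        return (1.0 - alpha) * normalise(saliency) + alpha * normalise(top_down_map)

    # The maximum of the fused map gives the image position of the next focus:
    #   y, x = np.unravel_index(np.argmax(fused), fused.shape)

Because the combination happens on maps rather than on symbolic hypotheses, adding a further modality amounts to contributing one more map to the sum, which is the integration property the abstract emphasises.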