Integrating context-free and context-dependent attentional mechanisms for gestural object reference

  • Authors:
  • Gunther Heidemann; Robert Rae; Holger Bekel; Ingo Bax; Helge Ritter

  • Affiliations:
  • Bielefeld University, Neuroinformatics Group, Faculty of Technology, P.O. Box 10 01 31, 33501 Bielefeld, Germany (all authors)

  • Venue:
  • Machine Vision and Applications
  • Year:
  • 2004

Abstract

We present a vision system for human-machine interaction based on a small wearable camera mounted on glasses. The camera views the area in front of the user, in particular the hands. To evaluate hand movements for pointing gestures and to recognise object references, we introduce an approach that integrates bottom-up generated feature maps with top-down propagated recognition results. Modules for context-free focus of attention run in parallel with hand gesture recognition. In contrast to other approaches, the two branches are fused at the sub-symbolic level. This method facilitates both the integration of different modalities and the generation of auditory feedback.
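
The core idea of the abstract, fusing context-free (bottom-up) feature maps with a context-dependent (top-down) map at the sub-symbolic level, can be illustrated with a minimal sketch. The function name `fuse_attention_maps`, the weighted-sum fusion rule, and the Gaussian-free stand-in maps below are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def fuse_attention_maps(bottom_up_maps, top_down_map, weights=None):
    """Illustrative sketch: combine context-free (bottom-up) feature maps
    with a context-dependent (top-down) map into one attention map.

    bottom_up_maps : list of 2-D arrays (e.g. colour, symmetry, entropy cues)
    top_down_map   : 2-D array derived from gesture recognition
                     (e.g. activity around a predicted pointing target)
    weights        : per-map fusion weights; uniform if omitted
    """
    maps = list(bottom_up_maps) + [top_down_map]
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    # Normalise each map to [0, 1] so no single cue dominates by scale.
    fused = np.zeros_like(maps[0], dtype=float)
    for w, m in zip(weights, maps):
        rng = m.max() - m.min()
        fused += w * ((m - m.min()) / rng if rng > 0 else m)
    return fused

# Example: the focus of attention is the maximum of the fused map.
rng = np.random.default_rng(0)
saliency = [rng.random((48, 64)) for _ in range(3)]   # stand-ins for feature maps
pointing = np.zeros((48, 64))
pointing[20:28, 30:40] = 1.0                          # stand-in gesture map
fused = fuse_attention_maps(saliency, pointing)
y, x = np.unravel_index(fused.argmax(), fused.shape)
print(f"focus of attention at pixel ({x}, {y})")
```

Because the fusion operates on pixel maps rather than symbolic hypotheses, further modalities can in principle be added as extra maps in the same weighted combination, which is the flexibility the abstract attributes to sub-symbolic integration.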