An object-oriented approach using a top-down and bottom-up process for manipulative action recognition

  • Authors:
  • Zhe Li; Jannik Fritsch; Sven Wachsmuth; Gerhard Sagerer

  • Affiliations:
  • Bielefeld University, Bielefeld, Germany (all four authors)

  • Venue:
  • DAGM'06: Proceedings of the 28th conference on Pattern Recognition
  • Year:
  • 2006


Abstract

Unlike many gesture-based human-robot interaction applications, which focus on recognizing interactional or pointing gestures, this paper proposes a vision-based method for manipulative gesture recognition that aims at natural, proactive, and non-intrusive interaction between humans and robots. The main contributions of the paper are an object-centered scheme for segmenting and characterizing hand-trajectory information, the use of particle filtering for spotting action primitives, and the tight coupling of bottom-up and top-down processing, which realizes a task-driven attention filter for the low-level recognition steps. In contrast to purely trajectory-based techniques, the presented approach is called object-oriented with respect to two aspects: it is object-centered in that the trajectory features are defined relative to an object, and it uses object-specific models for the action primitives. The system has a two-layer structure that recognizes both the HMM-modeled manipulative primitives and the underlying task characterized by the sequence of those primitives. The proposed top-down and bottom-up mechanism between the two layers reduces the image-processing load and improves the recognition rate.
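
The object-centered trajectory features mentioned in the abstract can be pictured as hand-motion measurements expressed in a coordinate frame anchored at the manipulated object. The sketch below is only an illustration of that idea, not the authors' implementation: the particular features (hand-object distance, radial velocity, approach angle) and all function names are assumptions made for this example.

```python
import numpy as np

def object_centered_features(hand_traj, obj_pos):
    """Re-express a hand trajectory relative to an object position.

    hand_traj : (T, 2) array of image-plane hand positions
    obj_pos   : (2,) array, position of the manipulated object

    Returns a (T-1, 3) feature matrix with, per time step:
      - distance between hand and object
      - radial velocity (rate of change of that distance)
      - cosine of the angle between hand motion and the hand->object direction
    These features are illustrative assumptions, not the paper's exact set.
    """
    rel = hand_traj - obj_pos                     # hand position in the object frame
    dist = np.linalg.norm(rel, axis=1)            # hand-object distance
    vel = np.diff(hand_traj, axis=0)              # frame-to-frame hand motion
    radial_vel = np.diff(dist)                    # negative while approaching the object
    to_obj = -rel[:-1]                            # direction from hand toward the object
    cos_angle = np.einsum('ij,ij->i', vel, to_obj) / (
        np.linalg.norm(vel, axis=1) * np.linalg.norm(to_obj, axis=1) + 1e-9)
    return np.column_stack([dist[:-1], radial_vel, cos_angle])

# Toy example: a hand moving straight toward an object at (5, 5).
hand = np.linspace([0.0, 0.0], [5.0, 5.0], num=20)
feats = object_centered_features(hand, np.array([5.0, 5.0]))
print(feats[:3])   # distance shrinks, radial velocity is negative, cos_angle is near 1
```

In the approach described by the abstract, such feature sequences would then be evaluated by object-specific HMM primitive models; in a reimplementation, an off-the-shelf HMM library could take over that role, with the recognized primitive sequence passed to the task layer.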