Model-based Integration of Visual Cues for Hand Tracking

  • Authors:
  • Shan Lu; Gang Huang; Dimitris Samaras; Dimitris Metaxas

  • Venue:
  • MOTION '02 Proceedings of the Workshop on Motion and Video Computing
  • Year:
  • 2002

Abstract

We present a model-based approach to the integration of multiple cues for tracking high-degree-of-freedom articulated motions. We then apply it to the problem of hand tracking using a single camera sequence. Hand tracking is particularly challenging because of occlusions, shading variations, and the high dimensionality of the motion. The novelty of our approach is in the combination of multiple sources of information that come from edges, optical flow, and shading. In particular, we introduce into deformable model theory a generalized version of the gradient-based optical flow constraint that includes shading flow, i.e., the variation of the shading of the object as it rotates with respect to the light source. This constraint unifies the shading and optical flow constraints: it simplifies to each one of them when the other is not present. Our use of cue information from the entirety of the hand enables us to track its complex articulated motion in the presence of shading changes. Given the model-based formulation, we use shading when the optical flow constraint is violated due to significant shading changes in a region. We use a forward recursive dynamic model to track the motion in response to 3D data-derived forces applied to the model. The hand is modeled as a base link (the palm) with five linked chains (the fingers), while the allowable motion of the fingers is controlled by recursive dynamics constraints. Model-driving forces are generated from edges, optical flow, and shading. The effectiveness of our approach is demonstrated with experiments on a number of different hand motions involving shading changes, rotations, and occlusions of significant parts of the hand.
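To make the unified constraint concrete, the following is a minimal sketch in conventional optical flow notation; the symbols (I for image brightness, u for the 2D image velocity, and the shading-flow term on the right-hand side) are illustrative assumptions, not necessarily the paper's own notation:

```latex
% Standard gradient-based optical flow constraint (brightness constancy):
% the brightness I(x, y, t) of a tracked image point is assumed constant.
\nabla I \cdot \mathbf{u} + I_t = 0
% Generalized constraint including shading flow: the right-hand side
% \dot{I}_s (illustrative notation) models the shading variation of the
% surface point as the object rotates with respect to the light source.
\nabla I \cdot \mathbf{u} + I_t = \dot{I}_s
```

When the shading term is zero, the equation reduces to the ordinary optical flow constraint; when the image velocity u is zero, it reduces to a pure shading constraint, consistent with the abstract's claim that the unified constraint simplifies to each component when the other is absent.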
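The force-driven tracking step can likewise be sketched in the standard first-order form used in physics-based deformable model theory. This is a simplification (the paper itself uses a forward recursive dynamic formulation for the articulated chains), and the symbols q, f_q, and L_i below are our assumed notation:

```latex
% First-order generalized dynamics of the hand model:
% q   -- generalized coordinates (global pose of the palm + finger joints)
% f_q -- generalized forces assembled from the image cues
\dot{\mathbf{q}} = \mathbf{f}_q,
\qquad
\mathbf{f}_q = \sum_i \mathbf{L}_i^{\top} \mathbf{f}_i
% Each 3D applied force f_i, derived from an edge, optical flow, or
% shading measurement, acts at a model point and is mapped into the
% generalized coordinates through the model Jacobian L_i at that point.
```

Under this view, the recursive dynamics constraints mentioned in the abstract restrict how the five finger chains may move relative to the palm base link while the cue-derived forces drive the model toward the observed image data.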