Representation and matching of articulated shapes

  • Authors:
  • Jiayong Zhang, Robert Collins, Yanxi Liu

  • Affiliations:
  • The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (all authors)

  • Venue:
  • CVPR '04: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2004

Abstract

We consider the problem of localizing the articulated and deformable shape of a walking person in a single view. We represent the non-rigid 2D body contour by a Bayesian graphical model whose nodes correspond to point positions along the contour. The deformability of the model is constrained by learned priors corresponding to two basic mechanisms: local non-rigid deformation and rotational motion at the joints. Four types of image cues are combined to relate the model configuration to the observed image: an edge gradient map, a foreground/background mask, a skin color mask, and appearance consistency constraints. The constructed Bayes network is sparse and chain-like, enabling efficient spatial inference through Sequential Monte Carlo sampling methods. We evaluate the performance of the model on images taken in cluttered outdoor scenes, and empirically explore the utility of each image cue.
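
To illustrate the kind of chain-structured Sequential Monte Carlo inference the abstract describes, the sketch below places contour nodes one at a time with a particle set, reweighting particles by combined image-cue responses. This is only a minimal illustration under stated assumptions: the cue maps are random placeholders, the Gaussian step stands in for the learned deformation and joint-rotation priors, and none of the names or parameters come from the paper itself.

```python
# Minimal sketch of chain-structured Sequential Monte Carlo inference for
# placing contour points, in the spirit of the abstract. All cue maps,
# prior parameters, and function names are illustrative placeholders,
# not the authors' actual model.

import numpy as np

rng = np.random.default_rng(0)

H, W = 120, 80                  # toy image size
N_NODES, N_PARTICLES = 20, 200  # contour nodes, particles per node

# Stand-ins for the image cues (edge gradients, fg/bg mask, skin color,
# appearance consistency in the paper). Here: random maps.
edge_map = rng.random((H, W))
fg_mask = rng.random((H, W))

def cue_likelihood(pts):
    """Combine cue responses at integer pixel locations (toy product)."""
    r = np.clip(pts[:, 1].astype(int), 0, H - 1)
    c = np.clip(pts[:, 0].astype(int), 0, W - 1)
    return edge_map[r, c] * fg_mask[r, c] + 1e-9

# Initialize particles for the first contour node near the image center.
particles = rng.normal(loc=[W / 2, H / 2], scale=5.0, size=(N_PARTICLES, 2))
weights = cue_likelihood(particles)
weights /= weights.sum()

contour = [np.average(particles, axis=0, weights=weights)]

for node in range(1, N_NODES):
    # Resample according to the current weights.
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    particles = particles[idx]

    # Propagate along the chain with a simple Gaussian "deformation" step
    # (placeholder for the learned non-rigid deformation and joint
    # rotation priors described in the abstract).
    particles = particles + rng.normal([0.0, 4.0], 2.0, size=(N_PARTICLES, 2))

    # Reweight by the combined image cues.
    weights = cue_likelihood(particles)
    weights /= weights.sum()

    contour.append(np.average(particles, axis=0, weights=weights))

contour = np.array(contour)  # estimated 2D positions of the contour nodes
print(contour.round(1))
```

The chain-like structure is what makes this sequential scheme applicable: each node's particles depend only on the previous node and the local image evidence, so inference cost grows linearly in the number of contour points.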