Human Pose Estimation in Vision Networks Via Distributed Local Processing and Nonparametric Belief Propagation

  • Authors:
  • Chen Wu; Hamid Aghajan

  • Affiliations:
  • Wireless Sensor Networks Lab, Department of Electrical Engineering, Stanford University, Stanford, CA 94305

  • Venue:
  • ACIVS '08 Proceedings of the 10th International Conference on Advanced Concepts for Intelligent Vision Systems
  • Year:
  • 2008

Abstract

In this paper we propose a self-initialized method for human pose estimation from multiple cameras. A graphical model of the articulated body is defined through explicit kinematic and structural constraints, which accommodates any plausible body configuration and avoids learning joint distributions from training data. Nonparametric belief propagation (NBP) is used to infer the marginal distributions. However, a reasonably good pose initialization is required to keep the inference from being trapped in local optima and to achieve fast convergence. A bottom-up approach detects body parts in the local processing of each camera, and 3D geometric correspondence relates the 2D camera observations spatially to generate a rough pose estimate that initializes the node marginal distributions. The marginal distributions are then refined through NBP. The estimated 3D body joint positions are quantitatively evaluated against motion capture data.
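
The spatial-correspondence step that seeds the node marginals can be illustrated by linear triangulation: each calibrated camera that detects a body joint contributes two linear constraints on its 3D position. Below is a minimal direct linear transformation (DLT) sketch in Python/NumPy; the function name and interfaces are hypothetical illustrations, not the authors' implementation.

    import numpy as np

    def triangulate_joint(projections, points_2d):
        """Triangulate one body joint from multiple views via DLT.

        projections: list of 3x4 camera projection matrices
        points_2d:   list of (u, v) pixel detections of the same joint
        Returns the joint's 3D position in world coordinates.
        """
        rows = []
        for P, (u, v) in zip(projections, points_2d):
            # Each view gives two linear constraints on the homogeneous
            # point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        # Least-squares solution of A @ X = 0: the right singular vector
        # associated with the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # dehomogenize

Triangulating each detected joint this way yields the rough 3D pose used to initialize the node marginal distributions before refinement.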
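
The NBP refinement passes messages represented by weighted particle sets rather than closed-form densities. The following is a simplified sketch of one particle-based message update under a kinematic limb-length constraint; the sampler, particle counts, and constraint form are illustrative assumptions, not the paper's exact potentials.

    import numpy as np

    rng = np.random.default_rng(0)

    def limb_sampler(parent_xyz, limb_length=0.3, noise=0.02):
        # Hypothetical kinematic potential: the child joint lies at an
        # approximately fixed limb length from the parent joint.
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        length = limb_length + rng.normal(scale=noise)
        return parent_xyz + length * direction

    def nbp_message(src_particles, src_weights, pairwise_sample, n_out=200):
        """One simplified NBP message update from a source node to a
        neighbor: resample the source belief by weight, then propagate
        each sample through the pairwise (kinematic) constraint. The
        returned particles form a kernel-density estimate of the message.
        """
        w = src_weights / src_weights.sum()
        idx = rng.choice(len(src_particles), size=n_out, p=w)
        return np.array([pairwise_sample(src_particles[i]) for i in idx])

In a full NBP implementation the source belief would first divide out the incoming message from the destination node, and products of incoming messages would be sampled with a Gibbs-style procedure; this sketch keeps only the propagation step.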