Integrating multiple uncalibrated views for human 3D pose estimation

  • Authors:
  • Zibin Wang; Ronald Chung

  • Affiliations:
  • Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong (both authors)

  • Venue:
  • ISVC'10: Proceedings of the 6th International Conference on Advances in Visual Computing - Volume Part III
  • Year:
  • 2010

Abstract

We address the problem of estimating human 3D pose from video data. Using multiple views has the potential both to handle self-occlusion of the human subject in any particular view and to estimate the pose more precisely. We propose a scheme that lets multiple views be combined naturally for determining human pose, so that hypotheses of the body parts in each view can be pruned away efficiently through a consistency check over all the views. The scheme relates the different views through a linear combination-like expression over all the image data, which captures the rigidity of the human subject in 3D, and it requires neither full calibration of the individual cameras nor knowledge of the inter-camera geometry. We also introduce a formulation that embeds the multi-view scheme, together with other constraints, into the pose estimation problem, and a belief propagation approach is used to arrive at the final human pose under this formulation. Experimental results on in-house captured image data as well as publicly available benchmark datasets illustrate the performance of the system.
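
As a rough illustration of the kind of cross-view consistency check described in the abstract, the sketch below fits an affine linear-combination-of-views model (in the spirit of Ullman and Basri's result for rigid point sets under affine cameras) to corresponding joint candidates in three uncalibrated views, and prunes hypotheses with a large fitting residual. The function names, the threshold, and the exact form of the linear relation are assumptions for illustration only; the paper's actual expression and pruning rule may differ.

```python
import numpy as np

def lcv_residual(ref_pts, aux_pts, target_pts):
    """Illustrative consistency measure (not the paper's exact expression).

    Under affine cameras, the image coordinates of a rigid 3D point set in
    one view can be written as an affine function of its coordinates in two
    other views.  Here we fit that relation by least squares and return the
    per-point residual.

    ref_pts, aux_pts, target_pts: (N, 2) arrays of corresponding joint
    candidates in three uncalibrated views.
    """
    n = ref_pts.shape[0]
    # Design matrix [x1, y1, x2, y2, 1] predicts (x3, y3) for each point.
    A = np.hstack([ref_pts, aux_pts, np.ones((n, 1))])
    coeffs, *_ = np.linalg.lstsq(A, target_pts, rcond=None)
    pred = A @ coeffs
    return np.linalg.norm(pred - target_pts, axis=1)

def prune_hypotheses(hypotheses, threshold=5.0):
    """Keep only body-part hypotheses whose three-view residual is small.

    hypotheses: list of (ref_pts, aux_pts, target_pts) candidate triples.
    threshold: residual cutoff in pixels (an assumed, illustrative value).
    """
    kept = []
    for ref_pts, aux_pts, target_pts in hypotheses:
        if lcv_residual(ref_pts, aux_pts, target_pts).mean() < threshold:
            kept.append((ref_pts, aux_pts, target_pts))
    return kept
```

In such a scheme the surviving hypotheses would then be passed to the part-based inference stage (e.g., belief propagation over the body model), so that only candidates consistent across all views contribute to the final pose estimate.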