Multi-view Appearance-based 3D Hand Pose Estimation

  • Authors:
  • Haiying Guan; Jae Sik Chang; Longbin Chen; Rogerio S. Feris; Matthew Turk

  • Affiliations:
  • University of California, Santa Barbara, CA, USA

  • Venue:
  • CVPRW '06 Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop
  • Year:
  • 2006

Abstract

We describe a novel approach to appearance-based hand pose estimation that relies on multiple cameras to improve accuracy and resolve ambiguities caused by self-occlusions. Rather than estimating 3D geometry, as most previous multi-view imaging systems do, our approach uses multiple views to extend current exemplar-based methods that estimate hand pose by matching a probe image with a large discrete set of labeled hand pose images. We formulate the problem in a MAP (maximum a posteriori) framework, where the information from multiple cameras is fused to provide reliable hand pose estimation. Our quantitative experimental results show that the correct estimation rate is much higher using our multi-view approach than using a single-view approach.
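To make the fusion idea concrete, here is a minimal sketch of MAP estimation over exemplar poses with multiple views, assuming each view produces a feature-space distance between the probe image and every labeled exemplar and that the observation model is Gaussian in that distance. The function `map_pose_estimate`, the distance values, and the Gaussian likelihood are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def map_pose_estimate(view_distances, prior=None, sigma=1.0):
    """Fuse per-view exemplar match distances into a MAP pose estimate.

    view_distances: array of shape (num_views, num_exemplars); each entry is a
        feature-space distance between the probe image seen by that camera and
        a labeled exemplar for the corresponding viewpoint (assumed inputs).
    prior: optional prior over exemplar poses (uniform if None).
    sigma: bandwidth of the assumed Gaussian observation model.
    """
    view_distances = np.asarray(view_distances, dtype=float)
    num_views, num_exemplars = view_distances.shape
    if prior is None:
        prior = np.full(num_exemplars, 1.0 / num_exemplars)

    # Work in log space: log posterior = log prior + sum over views of log likelihood.
    log_likelihoods = -(view_distances ** 2) / (2.0 * sigma ** 2)
    log_posterior = np.log(prior) + log_likelihoods.sum(axis=0)
    best = int(np.argmax(log_posterior))  # MAP exemplar index
    return best, log_posterior

# Toy usage: 3 views, 4 candidate exemplar poses (made-up distances).
distances = np.array([
    [0.9, 0.4, 1.2, 0.8],   # view 1: exemplar 1 matches best
    [1.1, 0.3, 0.9, 1.0],   # view 2 agrees
    [0.7, 0.5, 1.3, 0.6],   # view 3 is ambiguous but does not overturn the vote
])
best_pose, scores = map_pose_estimate(distances)
print("MAP exemplar index:", best_pose)
```

The point of the sketch is that a pose ambiguous in one view (e.g., due to self-occlusion) can still be resolved when the per-view evidence is combined multiplicatively in the posterior, which is the intuition behind the multi-view fusion described in the abstract.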