Vision-based hand pose estimation: A review

  • Authors:
  • Ali Erol;George Bebis;Mircea Nicolescu;Richard D. Boyle;Xander Twombly

  • Affiliations:
  • Computer Vision Laboratory, University of Nevada, Reno, NV 89557, USA;Computer Vision Laboratory, University of Nevada, Reno, NV 89557, USA;Computer Vision Laboratory, University of Nevada, Reno, NV 89557, USA;BioVis Laboratory, NASA Ames Research Center, Moffett Field, CA 94035, USA;BioVis Laboratory, NASA Ames Research Center, Moffett Field, CA 94035, USA

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2007

Abstract

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks: it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions, and as a result there have been considerable research efforts to use the hand as an input device for HCI. In particular, two research directions have emerged. The first is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation and aims to capture the real 3D motion of the hand. This paper presents a literature review of the latter research direction, which is a very challenging problem in the context of HCI.