Visual capture and understanding of hand pointing actions in a 3-D environment

  • Authors:
  • C. Colombo, A. Del Bimbo, A. Valli

  • Affiliations:
  • Dipt. di Sistemi e Informatica, Univ. di Firenze, Italy

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2003


Abstract

We present a nonintrusive, computer-vision-based system for human-computer interaction in three-dimensional (3-D) environments controlled by hand pointing gestures. Users can walk around a room and manipulate information displayed on its walls using their own hands as pointing devices. Hand pointing gestures, captured and tracked in real time with stereo vision, are remapped onto the current point of interest, reproducing the "drag and click" behavior of traditional mice in an advanced interaction scenario. The system, called PointAt (patent pending), relies on careful modeling of both the user and the optical subsystem, and on visual algorithms for self-calibration and adaptation to user peculiarities and environmental changes. The concluding sections provide insight into system characteristics, performance, and relevance for real applications.
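The abstract gives no implementation details, so the following is only a minimal sketch of the kind of geometric remapping it describes: it assumes stereo triangulation already yields two 3-D points on the pointing line (a hypothetical body reference point and the fingertip) and that the wall display is a known plane. All function and parameter names here are illustrative, not from the paper, and the self-calibration and user-adaptation steps the system actually performs are not modeled.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a pointing ray with a planar wall display.

    Returns the 3-D intersection point, or None if the ray is
    (nearly) parallel to the wall or points away from it.
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the wall
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # wall lies behind the user
    return origin + t * direction

def pointing_target(fingertip, reference, wall_origin, wall_u, wall_v):
    """Map a pointing gesture to 2-D coordinates on the wall display.

    `reference` and `fingertip` are 3-D points on the pointing line
    (hypothetically triangulated from the stereo pair); `wall_origin`,
    `wall_u`, `wall_v` give a corner of the display and two orthonormal
    in-plane axes, all in metres.
    """
    direction = fingertip - reference
    direction = direction / np.linalg.norm(direction)
    normal = np.cross(wall_u, wall_v)
    hit = ray_plane_intersection(reference, direction, wall_origin, normal)
    if hit is None:
        return None
    rel = hit - wall_origin
    return np.dot(rel, wall_u), np.dot(rel, wall_v)  # (x, y) on the wall

# Example: a user about 2 m from a wall lying in the y-z plane at x = 0.
xy = pointing_target(
    fingertip=np.array([1.6, 0.4, 1.3]),
    reference=np.array([2.0, 0.5, 1.5]),
    wall_origin=np.array([0.0, 0.0, 0.0]),
    wall_u=np.array([0.0, 1.0, 0.0]),
    wall_v=np.array([0.0, 0.0, 1.0]),
)
```

In the system described by the paper, the line-of-sight model and the camera parameters are recovered by the self-calibration procedures mentioned in the abstract; here, by contrast, the wall geometry is simply assumed to be known.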