Vision based hand gesture interfaces for wearable computing and virtual environments

  • Authors:
  • Mathias Kölsch; Matthew Turk

  • Affiliations:
  • -;-

  • Venue:
  • Ph.D. Dissertation
  • Year:
  • 2004

Abstract

Current user interfaces are ill-suited to harnessing the full power of computers. Mobile devices such as cell phones and technologies such as virtual reality demand a richer set of interaction modalities to overcome situational constraints and to fully leverage human expressiveness. Hand gesture recognition lets humans use their most versatile instrument—their hands—in more natural and effective ways than currently possible. While most gesture-recognition gear is cumbersome and expensive, gesture recognition with computer vision is non-invasive and more flexible. Yet it faces difficulties due to the hand's complexity, lighting conditions, background artifacts, and user differences. The contributions of this dissertation have helped make computer vision a viable technology for implementing hand gesture recognition for user interface purposes. To begin with, we investigated arm postures in front of the human body in order to avoid anthropometrically unfavorable gestures and to establish a “comfort zone” in which humans prefer to operate their hands. The dissertation's main contribution is “HandVu,” a computer vision system that recognizes hand gestures in real time. To achieve this, it was necessary to advance the reliability of hand detection to allow for robust system initialization in most environments and lighting conditions. After initialization, a “Flock of Features” exploits optical flow and color information to track the hand's location despite rapid movements and concurrent finger articulations. Lastly, robust appearance-based recognition of key hand configurations completes HandVu and facilitates input of discrete commands to applications. We demonstrate the feasibility of computer vision as the sole input modality to a wearable computer, providing “deviceless” interaction capabilities. We also present new and improved interaction techniques in the context of a multimodal interface to a mobile augmented reality system. HandVu allows us to exploit hand gesture capabilities that have previously been untapped, for example in areas where data gloves are not a viable option. This dissertation's goal is to contribute to the mosaic of available interface modalities and to widen the human-computer interface channel. Leveraging more of our expressiveness and our physical abilities offers new and advantageous ways to communicate with machines.
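
The tracking step described above combines per-feature optical flow with a skin-color cue. The following is a minimal, hypothetical Python/OpenCV sketch of a Flock-of-Features-style tracker, assuming a webcam feed; the flock size, distance thresholds, and HSV skin range are illustrative placeholders and not HandVu's actual parameters or code, and initialization here simply seeds corners over the whole frame rather than using a hand detector.

```python
# Hedged sketch of a "Flock of Features"-style tracker (not the original HandVu code).
# Assumptions: OpenCV (cv2), NumPy, a webcam at index 0, and illustrative constants.
import cv2
import numpy as np

NUM_FEATURES = 25            # size of the flock (assumed)
MIN_DIST = 5.0               # minimum pairwise separation between features (assumed)
MAX_DIST_TO_MEDIAN = 60.0    # maximum distance from the flock median (assumed)
SKIN_LOWER = np.array([0, 40, 60], dtype=np.uint8)     # HSV skin lower bound (assumed)
SKIN_UPPER = np.array([25, 180, 255], dtype=np.uint8)  # HSV skin upper bound (assumed)

def skin_mask(frame_bgr):
    """Binary mask of likely skin pixels; stands in for a learned color model."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, SKIN_LOWER, SKIN_UPPER)

def reseed_outliers(points, mask):
    """Respawn features that strayed from the flock, clumped together, or left skin areas."""
    median = np.median(points, axis=0)
    for i, p in enumerate(points):
        x, y = int(p[0]), int(p[1])
        too_far = np.linalg.norm(p - median) > MAX_DIST_TO_MEDIAN
        off_skin = not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x])
        too_close = any(np.linalg.norm(p - q) < MIN_DIST
                        for j, q in enumerate(points) if j != i)
        if too_far or off_skin or too_close:
            # Re-place the feature near the flock median with a small random offset.
            points[i] = median + np.random.uniform(-2 * MIN_DIST, 2 * MIN_DIST, size=2)
    return points

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# In HandVu, initialization would come from the hand detector; here we just seed
# good-to-track corners over the whole frame for illustration.
pts = cv2.goodFeaturesToTrack(prev_gray, NUM_FEATURES, 0.01, MIN_DIST).reshape(-1, 2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # KLT optical flow moves each feature independently; flock constraints and the
    # skin-color mask then pull stragglers back onto the hand.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts.astype(np.float32).reshape(-1, 1, 2), None)
    pts = reseed_outliers(new_pts.reshape(-1, 2), skin_mask(frame))
    hand_center = np.median(pts, axis=0)  # reported hand location

    for p in pts:
        cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)
    cv2.circle(frame, (int(hand_center[0]), int(hand_center[1])), 8, (0, 0, 255), 2)
    cv2.imshow("flock", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```

The point illustrated is the division of labor: each feature follows optical flow on its own, while simple flocking rules (stay near the median, keep a minimum separation, stay on skin-colored pixels) keep the swarm as a whole on the hand even when individual features are lost during rapid movement or finger articulation.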