Hand tracking for human-computer interaction with Graylevel VisualGlove: turning back to the simple way

  • Authors:
  • Giancarlo Iannizzotto; Massimo Villari; Lorenzo Vita

  • Affiliations:
  • University of Messina, C.da Papardo, Salita Sperone, Messina, Italy; University of Messina, C.da Papardo, Salita Sperone, Messina, Italy; University of Catania, Viale A. Doria 6, Catania, Italy

  • Venue:
  • Proceedings of the 2001 Workshop on Perceptive User Interfaces
  • Year:
  • 2001

Abstract

Recent developments in the manufacturing and marketing of low-power-consumption computers, small enough to be "worn" by users and remain almost invisible, have reintroduced the problem of overcoming the outdated paradigm of human-computer interaction based on the use of a keyboard and a mouse. Approaches based on visual tracking seem to be the most promising, as they do not require any additional devices (gloves, etc.) and can be implemented with off-the-shelf hardware such as webcams. Unfortunately, extremely variable lighting conditions and the high computational complexity of most of the available algorithms make these techniques hard to use in systems where CPU power consumption is a major issue (e.g. wearable computers) and in situations where lighting conditions are critical (outdoors, in the dark, etc.). This paper describes the work carried out at VisiLAB at the University of Messina as part of the VisualGlove Project to develop a real-time, vision-based device able to operate as a substitute for the mouse and other similar input devices. It operates in a wide range of lighting conditions, using a low-cost webcam and running on an entry-level PC. As explained in detail below, particular care has been taken to reduce computational complexity, in an attempt to limit the amount of resources needed for the whole system to work.
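
The abstract does not spell out the tracking pipeline, but the general idea of a gray-level, webcam-based pointer substitute can be sketched as follows. The threshold-based hand segmentation, the centroid-as-pointer mapping, and the use of OpenCV are assumptions made for illustration only, not the authors' actual method; the sketch is meant only to show how a low-cost, grayscale-only loop can stay cheap enough for real-time use on modest hardware.

```python
# Minimal sketch, assuming OpenCV is available and a simple intensity
# threshold roughly separates the hand from the background. The real
# VisualGlove segmentation and gesture logic are not described in the
# abstract, so everything below is a hypothetical stand-in.
import cv2

cap = cv2.VideoCapture(0)  # low-cost webcam, as in the paper's setup
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Work on gray-level images only, to keep per-frame cost low.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)

        # Hypothetical segmentation: Otsu thresholding stands in for the
        # paper's hand-detection step.
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Track the largest blob's centroid as the "pointer" position.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            blob = max(contours, key=cv2.contourArea)
            m = cv2.moments(blob)
            if m["m00"] > 0:
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                h, w = gray.shape
                # Normalized coordinates that a real system would map to
                # cursor movement or click events.
                print(f"pointer at ({cx / w:.2f}, {cy / h:.2f})")

        cv2.imshow("mask", mask)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

The only per-frame work in this sketch is a color conversion, a blur, a threshold and a contour scan, which is consistent with the paper's stated goal of keeping the resource footprint small enough for entry-level or wearable hardware.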