Navigating in virtual environments using a vision-based interface

  • Authors:
  • Konrad Tollmar;David Demirdjian;Trevor Darrell

  • Affiliations:
  • Artificial Intelligence, Cambridge, MA;Artificial Intelligence, Cambridge, MA;Artificial Intelligence, Cambridge, MA

  • Venue:
  • Proceedings of the third Nordic conference on Human-computer interaction
  • Year:
  • 2004


Abstract

Interacting with and navigating through virtual environments usually requires a wired interface, game console, or keyboard. The advent of perceptual interface techniques allows a new option: the passive and untethered sensing of users' pose and gesture, allowing them to maneuver through and manipulate virtual worlds. We describe new algorithms for interacting with 3-D environments using real-time articulated body tracking with standard cameras and personal computers. Our method is based on rigid stereo-motion estimation algorithms and can accurately track upper-body pose in real time. With our tracking system, users can navigate virtual environments using 3-D gestures and body poses. We analyze the space of possible perceptual interface abstractions for full-body navigation, and present a prototype system based on these results. Finally, we describe an initial evaluation of our prototype system with users guiding avatars through a series of 3-D virtual game worlds.
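The abstract describes mapping tracked upper-body pose to navigation commands. The paper does not give the mapping itself, but a minimal sketch of one plausible scheme is shown below: a tracked torso lean drives forward speed and shoulder yaw drives turn rate, each with a dead zone so that small postural noise does not move the avatar. All names, angle conventions, and thresholds here are illustrative assumptions, not the authors' actual design.

```python
from dataclasses import dataclass

@dataclass
class UpperBodyPose:
    # Hypothetical tracker output: forward torso lean and shoulder
    # rotation, both in degrees (not the paper's actual representation).
    lean: float
    yaw: float

def pose_to_command(pose, lean_deadzone=5.0, lean_limit=30.0,
                    yaw_deadzone=10.0, yaw_limit=45.0,
                    max_speed=2.0, max_turn=60.0):
    """Map a pose to (forward_speed m/s, turn_rate deg/s).

    Angles inside the dead zone produce no motion; beyond it, the
    output scales linearly and saturates at the maximum.
    """
    def scaled(value, deadzone, limit, max_out):
        if abs(value) <= deadzone:
            return 0.0
        sign = 1.0 if value > 0 else -1.0
        frac = min((abs(value) - deadzone) / (limit - deadzone), 1.0)
        return sign * frac * max_out

    speed = scaled(pose.lean, lean_deadzone, lean_limit, max_speed)
    turn = scaled(pose.yaw, yaw_deadzone, yaw_limit, max_turn)
    return speed, turn

# Standing upright and square-on: no motion.
print(pose_to_command(UpperBodyPose(lean=2.0, yaw=3.0)))   # (0.0, 0.0)
# Full forward lean with a hard left shoulder turn: saturated command.
print(pose_to_command(UpperBodyPose(lean=30.0, yaw=-45.0)))  # (2.0, -60.0)
```

A dead zone of this kind is a common choice in perceptual interfaces, since raw pose estimates jitter and an avatar that drifts while the user stands still is immediately noticeable.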