Integrating perceptual level of detail with head-pose estimation and its uncertainty

  • Authors:
  • Javier E. Martinez; Ali Erol; George Bebis; Richard Boyle; Xander Twombly

  • Affiliations:
  • University of Nevada, Computer Vision Laboratory, Reno, NV 89557, USA (Martinez, Erol, Bebis); NASA Ames Research Center, BioVis Laboratory, Moffett Field, CA 94035, USA (Boyle, Twombly)

  • Venue:
  • Machine Vision and Applications
  • Year:
  • 2009

Abstract

Immersive virtual environments with life-like interaction capabilities can provide a high-fidelity view of the virtual world and seamless interaction methods to the user. Such demanding requirements, however, raise many challenges in the development of sensing technologies and display systems. The focus of this study is on improving the performance of human–computer interaction through rendering optimizations guided by head-pose estimates and their uncertainties. This work is part of a larger study currently under investigation at NASA Ames, called “Virtual GloveboX” (VGX). VGX is a virtual simulator that aims to provide advanced training and simulation capabilities for astronauts to perform precise biological experiments in a glovebox aboard the International Space Station (ISS). Our objective is to enhance the virtual experience by incorporating information about the user’s viewing direction into the rendering process. In our system, the viewing direction is approximated by estimating head orientation using markers placed on a pair of polarized eye-glasses. Using eye-glasses does not pose any constraints in our operational environment, since they are an integral part of the stereo display used in VGX. During rendering, perceptual level-of-detail methods are coupled with head-pose estimation to improve the visual experience. A key contribution of our work is incorporating head-pose estimation uncertainties into the level-of-detail computations to account for head-pose estimation errors. Subject tests designed to quantify user satisfaction under different modes of operation indicate that incorporating uncertainty information during rendering improves the visual experience of the user.
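The core idea of the abstract — choosing a level of detail from an estimated viewing direction while compensating for pose-estimation error — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the eccentricity bands, and the strategy of shrinking an object's angular eccentricity by the uncertainty margin (so that a noisy pose estimate errs toward higher detail) are all assumptions for illustration.

```python
import math

def select_lod(obj_dir, gaze_dir, sigma_deg, bands=(5.0, 15.0, 30.0)):
    """Pick a level of detail (0 = finest) for an object.

    obj_dir, gaze_dir: unit 3-vectors (direction to the object from the
    viewpoint, and the estimated viewing direction). sigma_deg: head-pose
    uncertainty in degrees. The object's angular eccentricity is reduced
    by the uncertainty margin, widening each high-detail band so that
    estimation error favors finer rendering. Band edges are illustrative,
    not values from the paper.
    """
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(obj_dir, gaze_dir))))
    ecc = math.degrees(math.acos(dot))       # angular distance from gaze axis
    ecc_eff = max(0.0, ecc - sigma_deg)      # shrink by the uncertainty margin
    for lod, edge in enumerate(bands):
        if ecc_eff <= edge:
            return lod
    return len(bands)                        # coarsest level beyond all bands
```

With zero uncertainty an object 20° off-axis falls in the third band; with a 10° uncertainty its effective eccentricity drops to 10°, so it is rendered at the next-finer level.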