Least-Squares Fitting of Two 3-D Point Sets. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Conics-based stereo, motion estimation, and pose determination. International Journal of Computer Vision.
Hierarchical geometric models for visible surface algorithms. Communications of the ACM.
Introductory Techniques for 3-D Computer Vision.
A Developer's Survey of Polygonal Simplification Algorithms. IEEE Computer Graphics and Applications.
Perceptually guided simplification of lit, textured meshes. Proceedings of the 2003 Symposium on Interactive 3D Graphics (I3D '03).
Gaze-directed Adaptive Rendering for Interacting with Virtual Space. Proceedings of the 1996 Virtual Reality Annual International Symposium (VRAIS '96).
Level of Detail for 3D Graphics.
Pattern Classification (2nd Edition).
Patchlets: Representing Stereo Vision Data with Surface Elements. Proceedings of the Seventh IEEE Workshops on Application of Computer Vision (WACV/MOTION '05), Volume 1.
hMouse: Head Tracking Driven Virtual Computer Mouse. Proceedings of the Eighth IEEE Workshop on Applications of Computer Vision (WACV '07).
Estimating face pose by facial asymmetry and geometry. Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR '04).
Rendering optimizations guided by head-pose estimates and their uncertainty. Proceedings of the First International Conference on Advances in Visual Computing (ISVC '05).
Acuity-matching resolution degradation through wavelet coefficient scaling. IEEE Transactions on Image Processing.
Immersive virtual environments with life-like interaction capabilities can provide a high-fidelity view of the virtual world and seamless interaction methods to the user. These demanding requirements, however, raise many challenges in the development of sensing technologies and display systems. The focus of this study is on improving the performance of human–computer interaction through rendering optimizations guided by head-pose estimates and their uncertainties. This work is part of a larger study currently under investigation at NASA Ames, called “Virtual GloveboX” (VGX). VGX is a virtual simulator that aims to provide advanced training and simulation capabilities for astronauts performing precise biological experiments in a glovebox aboard the International Space Station (ISS). Our objective is to enhance the virtual experience by incorporating information about the user’s viewing direction into the rendering process. In our system, viewing direction is approximated by estimating head orientation using markers placed on a pair of polarized eyeglasses. Using eyeglasses does not impose any constraints in our operational environment, since they are an integral part of the stereo display used in VGX. During rendering, perceptual level-of-detail methods are coupled with head-pose estimation to improve the visual experience. A key contribution of our work is incorporating head-pose estimation uncertainties into the level-of-detail computations to account for head-pose estimation errors. Subject tests designed to quantify user satisfaction under different modes of operation indicate that incorporating uncertainty information during rendering improves the visual experience of the user.
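To make the idea concrete, the following is a minimal sketch (not the authors' actual method) of how pose uncertainty might enter a level-of-detail decision: the angular eccentricity of an object from the estimated viewing direction is reduced by the pose estimate's uncertainty, so objects that *might* be foveated are conservatively kept at high detail. The function name, thresholds, and the subtraction policy are all illustrative assumptions.

```python
import math

def select_lod(object_dir, gaze_dir, sigma_deg,
               lod_levels=4, fovea_deg=5.0, falloff_deg=15.0):
    """Pick a level of detail (0 = finest) from angular eccentricity,
    widened by the head-pose estimate's uncertainty (hypothetical policy)."""
    # Angle between the two unit direction vectors, in degrees.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(object_dir, gaze_dir))))
    ecc = math.degrees(math.acos(dot))
    # Conservative eccentricity: subtract the pose uncertainty so objects
    # that might actually be foveated are not over-simplified.
    ecc = max(0.0, ecc - sigma_deg)
    if ecc <= fovea_deg:
        return 0  # inside the (assumed) foveal region: finest detail
    coarse = int((ecc - fovea_deg) / falloff_deg) + 1
    return min(coarse, lod_levels - 1)
```

With zero uncertainty an object 30 degrees off-axis gets a coarse level, while a 20-degree pose uncertainty pulls it back toward finer detail; this is the effect the abstract describes, here realized with made-up constants.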