In order to estimate a user's head pose in a relatively large-scale environment for virtual reality (VR) applications, conventional approaches such as motion capture surround the user with multiple cameras. This paper proposes a method of estimating head pose from spherical images. The user wears a helmet on which a visual sensor is mounted, and the head pose is estimated by observing fiducial markers placed around him/her. Since a spherical image covers the full field of view, our method can cope with large head rotations that would take markers outside the view of a normal camera. Since the head pose at every time step is estimated directly from the observed markers, our method accumulates no error over time, unlike an inertial sensor. In our current experiments, an omnidirectional image sensor is used to acquire most of the spherical image.
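The rotational part of such a per-frame estimate can be illustrated with a minimal sketch: given unit bearing vectors from the sensor to fiducial markers whose world-frame directions are known, the head rotation is the solution of Wahba's problem, obtainable in closed form via the SVD (Kabsch method). This is not the paper's exact algorithm, only a hypothetical illustration of drift-free, per-frame absolute orientation from observed markers; the function and variable names are assumptions.

```python
import numpy as np

def estimate_head_rotation(world_dirs, observed_dirs):
    """Estimate the head rotation R such that observed_i ~= R @ world_i,
    from corresponding unit bearing vectors to fiducial markers.

    world_dirs, observed_dirs: (N, 3) arrays of unit vectors.
    Solves Wahba's problem with the SVD-based Kabsch method.
    """
    # Cross-covariance of corresponding directions: H = sum_i w_i o_i^T
    H = world_dirs.T @ observed_dirs
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R
```

Because the rotation is recomputed from scratch at every frame from absolute marker observations, errors do not accumulate over time, which is the property the abstract contrasts against inertial sensing.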