In this paper, we analyze and predict the gaze behavior of users navigating virtual environments. We focus on first-person navigation, which involves forward and backward motion on a ground surface with turns to the left or right. We found that gaze behavior in virtual reality, with input devices such as mice and keyboards, is similar to that observed in real life: participants anticipated turns just as they do in real-life conditions, i.e., when they can actually move their body and head. We also found influences of visual occlusions and optic flow similar to those reported in the existing literature on real navigation. We then propose three simple gaze prediction models taking as input (1) the motion of the user, given by the rotation velocity of the camera around the yaw axis (considered here as the virtual heading direction), and/or (2) the optic flow on screen. These models were tested on data collected in various virtual environments. Results show that they can significantly improve the prediction of gaze position on screen, especially during turns in the virtual environment. The model based on the camera's yaw rotation velocity appears to offer the best trade-off between simplicity and efficiency. We suggest that these models could be used in interactive applications that take the gaze point as input; they could also serve as a new top-down component in existing visual attention models.
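To make the camera-based model concrete, the following is a minimal sketch of a yaw-velocity gaze predictor. The linear mapping, the gain value, and the function name are illustrative assumptions rather than the fitted model from the paper; the sketch only shows how the camera's yaw rotation velocity could shift the predicted gaze point horizontally toward the inside of a turn.

```python
# Hypothetical sketch of a yaw-velocity-based gaze predictor.
# The linear form and all constants are assumptions for illustration;
# the paper's actual fitted model and parameters are not reproduced here.

def predict_gaze_x(yaw_velocity_deg_s: float,
                   gain: float = 0.005,
                   center_x: float = 0.5) -> float:
    """Predict the normalized horizontal gaze position in [0, 1].

    yaw_velocity_deg_s: camera rotation velocity around the yaw axis
        (deg/s); positive means turning right (assumed convention).
    gain: illustrative scaling from yaw velocity to on-screen offset.
    center_x: default gaze position when the camera is not rotating.
    """
    # Gaze anticipates the turn: shift the prediction toward the turn side.
    offset = gain * yaw_velocity_deg_s
    return min(1.0, max(0.0, center_x + offset))


if __name__ == "__main__":
    # A user turning right at 40 deg/s: the predicted gaze point moves
    # to the right of the screen center.
    print(predict_gaze_x(40.0))   # 0.7
    print(predict_gaze_x(0.0))    # 0.5 (straight-ahead navigation)
    print(predict_gaze_x(-40.0))  # 0.3 (turning left)
```

An optic-flow variant would replace the yaw-velocity term with a statistic of the on-screen flow field (for example, a weighted flow centroid), at the cost of estimating optic flow every frame.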