This paper introduces a novel visual attention model that computes the user's gaze position automatically, i.e. without a gaze-tracking system. The model is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that computes, in real time, a continuous gaze point position rather than a set of 3D objects potentially observed by the user. To do so, in contrast to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. The model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous on-screen gaze point position intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method against a state-of-the-art approach. Our method performed significantly better, with accuracy more than doubled. This suggests that computing a gaze point in a 3D virtual environment in real time is feasible and is a valid alternative to object-based approaches.
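The abstract only outlines how the bottom-up and top-down components are fused into a single gaze point. As an illustration only, and not the authors' actual implementation, the following minimal sketch assumes the two components are available as per-pixel screen-space maps and shows one plausible fusion scheme; the names `bottom_up`, `top_down`, `smoothing`, and `estimate_gaze_point` are hypothetical.

```python
import numpy as np

def estimate_gaze_point(bottom_up, top_down, prev_gaze=None, smoothing=0.7):
    """Fuse bottom-up and top-down saliency maps into one screen-space gaze point.

    bottom_up, top_down : 2D arrays in [0, 1] with the same shape (screen resolution).
    prev_gaze           : previous (x, y) estimate, used for temporal smoothing.
    smoothing           : weight given to the previous estimate (hypothetical value).
    """
    # Multiplicative fusion: a pixel attracts attention only if it is both
    # visually salient (bottom-up) and task-relevant (top-down).
    attention = bottom_up * top_down

    # Continuous gaze point: the pixel with the highest combined attention value.
    y, x = np.unravel_index(np.argmax(attention), attention.shape)
    gaze = np.array([x, y], dtype=float)

    # Simple exponential smoothing so the estimate does not jump every frame.
    if prev_gaze is not None:
        gaze = smoothing * np.asarray(prev_gaze) + (1.0 - smoothing) * gaze
    return gaze

if __name__ == "__main__":
    # Random maps stand in for real per-pixel saliency and task-relevance computations.
    h, w = 90, 160
    bu = np.random.rand(h, w)   # placeholder bottom-up (image-based) saliency
    td = np.random.rand(h, w)   # placeholder top-down (task-based) weights
    print(estimate_gaze_point(bu, td))
```

In a real-time renderer, the two maps would be produced on the GPU each frame and the fusion and smoothing steps would run on the resulting low-resolution buffers; the scheme above is only meant to convey the overall data flow.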