A real-time visual attention model for predicting gaze point during first-person exploration of virtual environments

  • Authors:
  • Sébastien Hillaire;Anatole Lécuyer;Tony Regia-Corte;Rémi Cozot;Jérôme Royan;Gaspard Breton

  • Affiliations:
  • Orange Labs / INRIA;INRIA;INRIA;INRIA / University of Rennes;Orange Labs;Orange Labs

  • Venue:
  • Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology
  • Year:
  • 2010

Abstract

This paper introduces a novel visual attention model that computes a user's gaze position automatically, i.e. without a gaze-tracking system. Our model is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, unlike previous models, which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method against a state-of-the-art approach. Our results were significantly better, with accuracy more than doubled. This suggests that computing a gaze point in real time in a 3D virtual environment is possible and is a valid alternative to object-based approaches.
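The core idea in the abstract, combining a bottom-up saliency map with top-down weights to obtain a single on-screen gaze point, can be sketched as follows. This is a minimal illustrative sketch only: the function name, the linear weighting, the map shapes, and the exponential smoothing used to keep the gaze point continuous are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def estimate_gaze_point(saliency, top_down, prev_gaze=None,
                        w_bu=0.5, w_td=0.5, smoothing=0.3):
    """Hypothetical sketch: fuse a bottom-up saliency map with a
    top-down attention map and return a smoothed 2D gaze point.

    saliency, top_down : 2D arrays of the same shape (per-pixel scores).
    prev_gaze          : previous (x, y) gaze point, for temporal smoothing.
    Returns (x, y) in pixel coordinates.
    """
    # Linear fusion of the two components (one of many possible schemes).
    attention = w_bu * saliency + w_td * top_down

    # Pick the most attended pixel as the raw gaze candidate.
    y, x = np.unravel_index(np.argmax(attention), attention.shape)

    if prev_gaze is None:
        return float(x), float(y)

    # Exponentially smooth toward the new candidate so the gaze point
    # moves continuously rather than jumping between frames.
    px, py = prev_gaze
    return (px + smoothing * (x - px),
            py + smoothing * (y - py))
```

A per-frame loop would call this with the current frame's maps and the previous result, yielding a continuous gaze trajectory rather than a discrete set of attended objects.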