Gaze behavior and visual attention model when turning in virtual environments

  • Authors:
  • Sébastien Hillaire; Anatole Lécuyer; Gaspard Breton; Tony Regia-Corte

  • Affiliations:
  • Orange Labs, INRIA, INSA; INRIA; Orange Labs; INRIA

  • Venue:
  • Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology
  • Year:
  • 2009


Abstract

In this paper we analyze and predict the gaze behavior of users navigating virtual environments. We focus on first-person navigation, which involves forward and backward motion on a ground surface with turns to the left or right. We found that gaze behavior in virtual reality, with input devices such as mice and keyboards, is similar to that observed in real life. Participants anticipated turns with their gaze, as they do in real-life conditions, i.e., when they can actually move their body and head. We also found influences of visual occlusions and optic flow similar to those reported in the existing literature on real navigation. We then propose three simple gaze-prediction models taking as input (1) the user's motion, given by the rotation velocity of the camera about the yaw axis (considered here as the virtual heading direction), and/or (2) the optic flow on screen. These models were tested with data collected in various virtual environments. Results show that they can significantly improve the prediction of gaze position on screen, especially when turning in the virtual environment. The model based on the camera's rotation velocity appears to offer the best trade-off between simplicity and efficiency. We suggest that these models could be used in interactive applications that take the gaze point as input, or as a new top-down component in existing visual attention models.
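The abstract does not give the exact formula of the camera-rotation model, but its core idea, that gaze anticipates turns by shifting horizontally in the turn direction proportionally to yaw velocity, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the function name `predict_gaze_x`, the gain `k`, the sign convention, and the clamping are all hypothetical choices.

```python
# Minimal sketch (not the paper's exact model) of a gaze predictor driven
# by the camera's yaw rotation velocity. Assumption: the predicted gaze
# point shifts horizontally from screen center toward the turn direction,
# proportionally to yaw velocity.

def predict_gaze_x(yaw_velocity_deg_s: float,
                   screen_width_px: int,
                   k: float = 4.0) -> float:
    """Predict the horizontal gaze position (in pixels) from yaw velocity.

    yaw_velocity_deg_s: camera rotation speed about the yaw axis (deg/s),
                        positive when turning right (assumed convention).
    screen_width_px:    display width in pixels.
    k:                  gain mapping deg/s to pixels of gaze shift
                        (hypothetical value, not taken from the paper).
    """
    center = screen_width_px / 2.0
    x = center + k * yaw_velocity_deg_s  # shift toward the turn direction
    # Keep the prediction on screen.
    return max(0.0, min(float(screen_width_px - 1), x))


if __name__ == "__main__":
    # In this sketch, a right turn at 30 deg/s on a 1280-px-wide screen
    # shifts the predicted gaze point 120 px right of center.
    print(predict_gaze_x(30.0, 1280))  # -> 760.0
```

The optic-flow variant described in the abstract would replace or combine this yaw-velocity term with a saliency cue derived from the on-screen flow field; the paper reports that the yaw-velocity model alone is the best trade-off between simplicity and efficiency.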