A saliency-based method of simulating visual attention in virtual scenes

  • Authors:
  • Oyewole Oyekoya; William Steptoe; Anthony Steed

  • Affiliations:
  • University College London; University College London; University College London

  • Venue:
  • Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology
  • Year:
  • 2009

Abstract

Complex interactions occur in virtual reality systems, requiring next-generation attention models to produce believable virtual-human animation. This paper presents a saliency model, neither domain- nor task-specific, that is used to animate the gaze of virtual characters. It addresses a critical question: what types of saliency attract attention in virtual environments, and how can they be weighted to drive an avatar's gaze? Saliency effects were measured as a function of their total frequency. Scores were then generated for each object in the field of view within each frame to determine the most salient object in the virtual environment. The paper compares the resulting saliency gaze model against three alternatives: tracked gaze, in which the avatars' eyes are controlled by head-mounted mobile eye-trackers worn by human subjects; a random gaze model informed by head orientation for saccade generation; and static gaze featuring non-moving, centred eyes. Results from the evaluation experiment and graphical analysis demonstrate a promising saliency gaze model that is not only believable and realistic but also target-relevant and adaptable to varying tasks. Furthermore, the saliency model uses no prior knowledge of the content or description of the virtual scene.
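
As a rough sketch of the per-frame selection loop the abstract describes, the snippet below scores every object in the field of view with a weighted sum of saliency features and picks the highest-scoring one as the gaze target. The feature names (proximity, velocity, eccentricity) and the weight values are illustrative assumptions only; the paper's actual feature set and its frequency-derived weights are not reproduced here.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SceneObject:
    # Hypothetical, normalised (0..1) saliency features per object per frame.
    name: str
    proximity: float      # closeness to the viewer
    velocity: float       # motion magnitude
    eccentricity: float   # angular distance from the view centre

# Placeholder weights; in the paper these would follow from the measured
# frequency of each saliency effect.
WEIGHTS = {"proximity": 0.4, "velocity": 0.4, "eccentricity": 0.2}

def saliency_score(obj: SceneObject) -> float:
    """Weighted sum of saliency features for one object in one frame."""
    return (WEIGHTS["proximity"] * obj.proximity
            + WEIGHTS["velocity"] * obj.velocity
            # Objects nearer the view centre are more salient, so invert.
            + WEIGHTS["eccentricity"] * (1.0 - obj.eccentricity))

def gaze_target(objects_in_fov: list[SceneObject]) -> SceneObject | None:
    """Return the most salient object in the field of view this frame."""
    if not objects_in_fov:
        return None
    return max(objects_in_fov, key=saliency_score)

# Example frame: the avatar's gaze would be driven toward the winner.
frame = [
    SceneObject("ball", proximity=0.8, velocity=0.9, eccentricity=0.3),
    SceneObject("chair", proximity=0.5, velocity=0.0, eccentricity=0.6),
]
print(gaze_target(frame).name)  # -> "ball"
```

Because the scoring needs only geometric quantities that any engine can report per frame, a loop of this shape requires no prior knowledge of the scene's content or description, consistent with the abstract's closing claim.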