Short paper: exploring the object relevance of a gaze animation model
EGVE - JVRC'11 Proceedings of the 17th Eurographics Conference on Virtual Environments & Third Joint Virtual Reality Conference
Complex interactions occur in virtual reality systems, requiring next-generation attention models to produce believable virtual human animations. This paper presents a saliency model, neither domain- nor task-specific, that is used to animate the gaze of virtual characters. It addresses a critical question: which types of saliency attract attention in virtual environments, and how can they be weighted to drive an avatar's gaze? Saliency effects were measured as a function of their total frequency; scores were then generated for each object in the field of view within each frame to determine the most salient object in the virtual environment. The resulting saliency gaze model is compared with tracked gaze, in which avatars' eyes are driven by head-mounted mobile eye trackers worn by human subjects; a random gaze model informed by head orientation for saccade generation; and a static gaze model with non-moving, centered eyes. Results from the evaluation experiment and graphical analysis demonstrate a promising saliency gaze model that is not only believable and realistic but also target-relevant and adaptable to varying tasks. Furthermore, the saliency model requires no prior knowledge of the content or description of the virtual scene.
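The per-frame scoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names, the weight values, and the scene objects are all hypothetical assumptions standing in for the paper's empirically derived saliency effects and frequency-based weights.

```python
# Illustrative sketch of frame-wise saliency scoring for gaze-target selection.
# Feature names and weights below are hypothetical; the paper derives its
# weights from the measured total frequency of each saliency effect.

from dataclasses import dataclass

# Assumed saliency-effect weights (would come from the frequency analysis).
WEIGHTS = {"motion": 0.4, "proximity": 0.3, "color_contrast": 0.2, "novelty": 0.1}

@dataclass
class SceneObject:
    name: str
    features: dict  # per-frame saliency feature values in [0, 1]

def saliency_score(obj: SceneObject) -> float:
    """Weighted sum of an object's saliency features for the current frame."""
    return sum(WEIGHTS[f] * v for f, v in obj.features.items() if f in WEIGHTS)

def most_salient(objects_in_fov: list[SceneObject]) -> SceneObject:
    """Gaze target for this frame: highest-scoring object in the field of view."""
    return max(objects_in_fov, key=saliency_score)

# Usage: two objects visible in one frame.
ball = SceneObject("ball", {"motion": 0.9, "proximity": 0.2})
lamp = SceneObject("lamp", {"color_contrast": 0.8, "novelty": 0.5})
print(most_salient([ball, lamp]).name)  # the moving, nearby ball wins
```

Running the selection once per rendered frame, restricted to objects inside the avatar's field of view, yields a gaze target without any prior knowledge of the scene's content, matching the model's scene-agnostic design.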