Future interactive virtual environments will be “attention-aware,” capable of predicting, reacting to, and ultimately influencing the visual attention of their human operators. Before such environments can be realized, it is necessary to operationalize our understanding of the relevant aspects of visual perception, in the form of fully automated computational heuristics that can efficiently identify locations that would attract human gaze in complex dynamic environments. One promising approach to designing such heuristics draws on ideas from computational neuroscience. We compared several neurobiologically inspired heuristics with eye-movement recordings from five observers playing video games, and found that human gaze was better predicted by heuristics that detect outliers from the global distribution of visual features than by purely local heuristics. Heuristics sensitive to dynamic events performed best overall. Further, heuristic prediction power differed more between games than between different human observers. While other factors clearly also influence eye position, our findings suggest that simple neurally inspired algorithmic methods can account for a significant portion of human gaze behavior in a naturalistic, interactive setting. These algorithms may be useful in the implementation of interactive virtual environments, both to predict the cognitive state of human operators and to endow virtual agents in the system with humanlike visual behavior.
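The notion of detecting outliers from the global distribution of visual features can be sketched with a toy heuristic: score each pixel by how far its feature value deviates from the frame-wide feature statistics, and take the most deviant location as the predicted gaze target. This is a hypothetical illustration using a single intensity feature and a z-score, not the paper's actual heuristics, which operate on multiple feature channels including dynamic ones.

```python
import numpy as np

def global_outlier_saliency(frame):
    """Toy saliency map: score each pixel by its deviation from the
    global feature distribution (illustrative sketch only; real systems
    use color, orientation, flicker, and motion channels)."""
    # Feature: grayscale intensity; collapse color channels if present.
    gray = frame.astype(float)
    if gray.ndim == 3:
        gray = gray.mean(axis=2)
    # Outlier score: absolute z-score against the frame-wide distribution.
    mu, sigma = gray.mean(), gray.std() + 1e-8
    saliency = np.abs(gray - mu) / sigma
    return saliency / saliency.max()

def predict_gaze(frame):
    """Return the (row, col) of the most salient location."""
    s = global_outlier_saliency(frame)
    return np.unravel_index(np.argmax(s), s.shape)
```

On a mostly uniform frame containing one bright spot, the predicted gaze lands on the spot, since it is the strongest outlier from the global intensity distribution.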