Machine vision.
The computation of optical flow. ACM Computing Surveys (CSUR).
Modeling visual attention via selective tuning. Artificial Intelligence (Special Volume on Computer Vision).
The interactive museum tour-guide robot. AAAI '98/IAAI '98 (Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence).
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Identifying fixations and saccades in eye-tracking protocols. ETRA '00 (Proceedings of the 2000 Symposium on Eye Tracking Research & Applications).
A Context-Dependent Attention System for a Social Robot. IJCAI '99 (Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence).
Causal saliency effects during natural vision. ETRA '06 (Proceedings of the 2006 Symposium on Eye Tracking Research & Applications).
Evaluation of selective attention under similarity transformations. Computer Vision and Image Understanding (Special Issue: Attention and Performance in Computer Vision).
Computation for Metaphors, Analogy, and Agents.
Towards Learning by Interacting. Creating Brain-Like Intelligence.
A network of integrate and fire neurons for visual selection. Neurocomputing.
Chaotic phase synchronization for visual selection. IJCNN '09 (Proceedings of the 2009 International Joint Conference on Neural Networks).
Stability and sensitivity of bottom-up visual attention for dynamic scene analysis. IROS '09 (Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems).
A probabilistic model of overt visual attention for cognitive robots. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Modelling salient visual dynamics in videos. Multimedia Tools and Applications.
Robots often incorporate computational models of visual attention to streamline processing. Although the number of visual attention systems deployed on robots has grown dramatically in recent years, the evaluation of these systems has remained primarily qualitative and subjective. We introduce quantitative methods for evaluating computational models of visual attention by direct comparison with gaze trajectories recorded from humans. In particular, we argue that evaluation metrics should operate not on distances within the image plane, but at the level of the underlying visual features. We present a framework, based on dimensionality reduction over the features of human gaze trajectories, that can be used both to optimize a particular computational model of visual attention and to evaluate its performance in terms of similarity to human behavior. We use this framework to evaluate the Itti et al. (1998) model of visual attention, a computational model that serves as the basis for many robotic visual attention systems.
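The core idea described above, comparing model and human fixations in a reduced feature space rather than by image-plane distance, can be illustrated with a minimal sketch. All function names and data shapes here are hypothetical, and PCA via SVD stands in for whatever dimensionality reduction the paper's framework actually uses; each row of a feature matrix is assumed to hold the feature values (e.g., color, intensity, orientation responses) sampled at one fixation point.

```python
import numpy as np

def fit_feature_subspace(human_features, n_components=2):
    """Fit a low-dimensional subspace (PCA via SVD) to the feature
    vectors sampled at human fixation points.

    human_features: (n_fixations, n_features) array. Hypothetical input.
    Returns the feature mean and the top principal axes (rows of Vt).
    """
    mean = human_features.mean(axis=0)
    centered = human_features - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def feature_level_distance(human_features, model_features, n_components=2):
    """Score a model's fixations against human ones in the reduced
    feature space, not in image-plane coordinates.

    Projects both trajectories into the subspace fit to the human data
    and returns the distance between the projected centroids. A smaller
    value means the model attends to more human-like features.
    """
    mean, components = fit_feature_subspace(human_features, n_components)
    h = (human_features - mean) @ components.T
    m = (model_features - mean) @ components.T
    return float(np.linalg.norm(h.mean(axis=0) - m.mean(axis=0)))
```

A metric of this kind could then drive model optimization, e.g., by searching over a saliency model's channel weights to minimize the feature-level distance to recorded human gaze data; the centroid distance used here is only one of many plausible trajectory-similarity measures in the reduced space.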