Modeling visual attention via selective tuning. Artificial Intelligence, special volume on computer vision.
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Image Field Categorization and Edge/Corner Detection from Gradient Covariance. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision.
Evaluation of selective attention under similarity transformations. Computer Vision and Image Understanding, special issue on attention and performance in computer vision.
Visual Selective Attention Model for Robot Vision. LARS '08: Proceedings of the 2008 IEEE Latin American Robotic Symposium.
Towards standardization of metrics for evaluation of artificial visual attention. Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop.
Image retrieval by content based on a visual attention model and genetic algorithms. SBIA '12: Proceedings of the 21st Brazilian Conference on Advances in Artificial Intelligence.
Computational models of visual attention, originally proposed as cognitive models of human attention, are now used as front-ends for robotic vision tasks such as automatic object recognition and landmark detection. These applications, however, impose requirements different from those of the original cognitive models. In particular, a robotic vision system must be relatively insensitive to 2D similarity transformations of the image: in-plane translations, rotations, reflections, and scalings. This paper describes several experiments with two publicly available visual attention models. The results show that the best-known model, NVT, is extremely sensitive to these 2D similarity transformations. A new visual attention model, NLOOK, is therefore proposed and validated with the same invariance criteria; the results show that NLOOK is less sensitive to these kinds of transformations than the other two models. Moreover, NLOOK selects better fixations according to a redundancy criterion. The proposed model is thus well suited to robot vision systems.
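The invariance criterion described above can be made concrete: if a model is invariant, the fixations it predicts on a transformed image should coincide with the original fixations mapped through the same transform. The sketch below, a minimal illustration and not the paper's actual evaluation protocol, applies a 2D similarity transform (rotation, uniform scale, translation) to fixation coordinates and measures the mean displacement between the expected and observed fixations; zero means perfect invariance. The function names and the choice of mean Euclidean distance are assumptions for illustration.

```python
import numpy as np

def similarity_transform(points, angle_deg, scale, t, center=(0.0, 0.0)):
    """Apply a 2D similarity transform (rotation + uniform scale + translation)
    to an (N, 2) array of fixation coordinates."""
    a = np.deg2rad(angle_deg)
    # Rotation-plus-scale matrix of the similarity transform.
    R = scale * np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
    c = np.asarray(center, dtype=float)
    return (np.asarray(points, dtype=float) - c) @ R.T + c + np.asarray(t, dtype=float)

def fixation_shift(fix_orig, fix_transformed, angle_deg, scale, t, center=(0.0, 0.0)):
    """Mean Euclidean distance between the fixations a model predicts on the
    transformed image and the original fixations mapped through the same
    transform. Zero indicates perfect invariance (hypothetical metric,
    assumed here for illustration)."""
    expected = similarity_transform(fix_orig, angle_deg, scale, t, center)
    return float(np.mean(np.linalg.norm(np.asarray(fix_transformed) - expected, axis=1)))
```

For a perfectly invariant model, feeding back the mapped fixations themselves yields a shift of zero; any systematic displacement of the model's fixations under rotation or scaling shows up directly as a positive score.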