Evaluation of visual attention models under 2D similarity transformations

  • Authors:
  • Milton Roberto Heinen; Paulo Martins Engel

  • Affiliations:
  • UFRGS -- Informatics Institute, Porto Alegre, RS, Brazil (both authors)

  • Venue:
  • Proceedings of the 2009 ACM Symposium on Applied Computing
  • Year:
  • 2009

Abstract

Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to robotic vision systems, such as automatic object recognition and landmark detection. However, these applications have requirements different from those of the original proposals. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transformations of the image, such as in-plane translations, rotations, reflections, and scalings. In this paper, several experiments with two publicly available visual attention models are described. The results show that the best-known model, called NVT, is extremely sensitive to these 2D similarity transformations. Therefore, a new visual attention model, called NLOOK, is proposed and validated with the same invariance criteria; the results show that NLOOK is less sensitive to these kinds of transformations than the other two models. Moreover, NLOOK selects better fixations according to a redundancy criterion. Thus, the proposed model is an excellent tool for robot vision systems.
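
The invariance test the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes OpenCV-style images, and model_fixations is a hypothetical hook into NVT, NLOOK, or any other attention model that returns (x, y) fixation points.

```python
import numpy as np
import cv2

def similarity_matrix(shape, angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0):
    """Build a 2x3 matrix for an in-plane similarity transform:
    rotation and scaling about the image centre plus a translation."""
    h, w = shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, scale)
    M[0, 2] += tx
    M[1, 2] += ty
    return M

def transform_points(points, M):
    """Map (x, y) points through a 2x3 affine matrix."""
    pts = np.asarray(points, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return pts_h @ M.T

def invariance_error(fix_original, fix_transformed, M):
    """Mean distance between fixations computed on the transformed image
    and the original fixations mapped through the same transform.
    A perfectly transform-invariant model would score 0."""
    expected = transform_points(fix_original, M)
    observed = np.asarray(fix_transformed, dtype=float)
    return np.linalg.norm(expected - observed, axis=1).mean()

if __name__ == "__main__":
    # Synthetic test image; in practice this would be a real scene.
    img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    M = similarity_matrix(img.shape, angle_deg=30.0, scale=0.8)
    warped = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    # model_fixations() is a hypothetical hook into an attention model:
    # err = invariance_error(model_fixations(img), model_fixations(warped), M)
```

The key idea is that sensitivity becomes measurable by comparing the fixations a model produces on the transformed image against the original fixations mapped through the same geometric transform; reflections can be tested the same way by composing a flip into the matrix.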