Inherent limitations of visual search and the role of inner-scene similarity

  • Authors:
  • Tamar Avraham; Michael Lindenbaum

  • Affiliations:
  • Computer Science Department, Technion, Haifa, Israel (both authors)

  • Venue:
  • WAPCV'04 Proceedings of the Second International Conference on Attention and Performance in Computational Vision
  • Year:
  • 2004


Abstract

This work focuses on inner-scene object similarity as an information source for directing attention and for speeding up visual search performed by artificial vision systems. A scalar measure (similar to Kolmogorov's ε-covering of metric spaces) is suggested for quantifying how much a visual search task can benefit from this source of information. The measure is algorithm-independent, providing an inherent characterization of task difficulty, and can also be used as a predictor of search performance. We show that this measure is a lower bound on the performance of every search algorithm, and we present a simple algorithm whose performance it bounds from above. Since calculating a minimum metric cover is NP-hard, we use both a heuristic and a 2-approximation algorithm to estimate it, and test the validity of our theorem on several experimental search tasks. This work can be considered an attempt to quantify Duncan and Humphreys' similarity theory [5].
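The abstract does not spell out the paper's cover-estimation procedure, but the kind of greedy heuristic typically used to approximate an ε-cover can be sketched as follows. This is an illustrative sketch, not the authors' algorithm: the function names, the point representation, and the use of Euclidean distance as the inner-scene similarity metric are all assumptions. Scanning the points and keeping each one that lies farther than ε from all previously kept centers yields a set that is simultaneously an ε-cover and ε-separated, which ties its size to the covering numbers at radii ε and ε/2 (the sense in which such greedy constructions are 2-approximations).

```python
import math


def greedy_epsilon_cover(points, eps, dist):
    """Greedy epsilon-net sketch (illustrative, not the paper's method).

    Scan the points in order; keep a point as a new center if it is
    farther than eps from every center kept so far. The result is an
    eps-cover of the input (every point is within eps of some center),
    and the centers are pairwise more than eps apart.
    """
    centers = []
    for p in points:
        if all(dist(p, c) > eps for c in centers):
            centers.append(p)
    return centers


def euclidean(a, b):
    # Placeholder metric; the paper's measure is defined over
    # inner-scene object similarity, not raw coordinates.
    return math.dist(a, b)
```

For example, four candidate objects forming two tight clusters are covered by two centers at ε = 1, matching the intuition that high inner-scene similarity (tight clusters) makes the search task easier.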