On the relative complexity of active vs. passive visual search
International Journal of Computer Vision
Control of selective perception using Bayes nets and decision theory
International Journal of Computer Vision - Special issue on active vision II
Using intermediate objects to improve the efficiency of visual search
International Journal of Computer Vision - Special issue on active vision II
An active vision architecture based on iconic representations
Artificial Intelligence - Special volume on computer vision
Modeling visual attention via selective tuning
Artificial Intelligence - Special volume on computer vision
A model of saliency-based visual attention for rapid scene analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
A maximum-likelihood strategy for directing attention during visual search
IEEE Transactions on Pattern Analysis and Machine Intelligence
Models of bottom-up and top-down visual attention
PhD thesis, California Institute of Technology
This work focuses on inner-scene object similarity as an information source for directing attention and for speeding up visual search performed by artificial vision systems. A scalar measure, similar to Kolmogorov's ε-covering of metric spaces, is suggested for quantifying how much a visual search task can benefit from this source of information. The measure is algorithm independent, providing an inherent characterization of task difficulty, and can also be used as a predictor of search performance. We show that this measure is a lower bound on the performance of every search algorithm, and we provide a simple algorithm whose performance the measure also bounds from above. Since computing a minimal metric cover is NP-hard, we use both a heuristic and a 2-approximation algorithm to estimate it, and we test the validity of our theorem on several experimental search tasks. This work can be considered an attempt to quantify Duncan and Humphreys' similarity theory [5].
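The abstract does not specify which approximation procedures are used. As an illustration only, the sketch below estimates the size of an ε-cover via Gonzalez's farthest-first traversal, a standard 2-approximation for the closely related metric k-center problem; it is not the authors' algorithm. The function names (farthest_first_radii, eps_cover_size), the Euclidean metric, and the restriction of ball centers to data points are all assumptions made for this sketch.

import math

def farthest_first_radii(points, dist=math.dist):
    """Gonzalez's farthest-first traversal. Returns a list where radii[k-1]
    is the covering radius achieved by the first k greedy centers; for each
    k this radius is at most twice the optimal k-center radius (the classic
    2-approximation guarantee). Hypothetical illustration, not the paper's
    estimation procedure."""
    d = [dist(p, points[0]) for p in points]  # distance to nearest center so far
    radii = [max(d)]
    for _ in range(len(points) - 1):
        i = max(range(len(points)), key=d.__getitem__)  # farthest uncovered point
        d = [min(dj, dist(pj, points[i])) for dj, pj in zip(d, points)]
        radii.append(max(d))
    return radii

def eps_cover_size(points, eps, dist=math.dist):
    """Upper bound on the minimal number of eps-balls (centered at data
    points) needed to cover the set: the smallest k at which the greedy
    covering radius drops to eps or below."""
    radii = farthest_first_radii(points, dist)
    return next(k for k, r in enumerate(radii, start=1) if r <= eps)

# Example: two tight clusters are covered by two eps-balls.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(eps_cover_size(pts, eps=0.2))  # -> 2

A single greedy pass suffices here because the covering radius is non-increasing as centers are added, so the smallest adequate k can be read off the radius sequence without rerunning the traversal per candidate k.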