A precise analysis of an entire image is computationally wasteful if one is interested in finding a target object located in a subregion of the image. A useful "attention strategy" can reduce the overall computation by carrying out fast but approximate image measurements and using their results to suggest a promising subregion. This paper proposes a maximum-likelihood attention mechanism that does this. The attention mechanism recognizes that objects are made of parts and that parts have different features. It works by proposing object part and image feature pairings which have the highest likelihood of coming from the target. The exact calculation of the likelihood as well as approximations are provided. The attention mechanism is adaptive, that is, its behavior adapts to the statistics of the image features. Experimental results suggest that, on average, the attention mechanism evaluates less than 2 percent of all part-feature pairs before selecting the actual object, showing a significant reduction in the complexity of visual search.
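The core idea of ranking part-feature pairings by likelihood can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a simple Gaussian model of each part's expected feature value (the paper derives exact likelihoods and approximations), and all part names, means, and feature values below are hypothetical.

```python
import math
from itertools import product

def pair_likelihood(part_mean, part_var, feature_value):
    """Gaussian likelihood that an observed image feature came from a given
    object part (illustrative stand-in for the paper's likelihood model)."""
    return (math.exp(-(feature_value - part_mean) ** 2 / (2 * part_var))
            / math.sqrt(2 * math.pi * part_var))

def attention_order(parts, features):
    """Rank every (part, feature) pairing by descending likelihood, so the
    most promising pairs are evaluated first.
    parts: list of (name, mean, variance); features: list of (id, value)."""
    scored = [
        (pair_likelihood(mean, var, value), pname, fid)
        for (pname, mean, var), (fid, value) in product(parts, features)
    ]
    scored.sort(reverse=True)
    return [(pname, fid) for _, pname, fid in scored]

# Hypothetical example: two object parts, three features extracted from the image.
parts = [("corner", 1.0, 0.1), ("edge", 5.0, 0.5)]
features = [("f1", 0.9), ("f2", 5.2), ("f3", 3.0)]
order = attention_order(parts, features)
# The search then verifies pairs in this order, stopping as soon as the
# target is confirmed -- typically after only a small fraction of all pairs.
```

An adaptive version would re-estimate the per-part parameters from the image's own feature statistics before ranking, which is the sense in which the mechanism's behavior adapts to the image.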