We present a new algorithm for selecting suitable training images for our biologically motivated attention system VOCUS. The system detects regions of interest based on bottom-up (scene-dependent) and top-down (target-specific) cues; the top-down cues are learned by VOCUS from one or several training images. We show that our algorithm chooses a subset of the training set that outperforms both the selection of a single training image and the naive use of all available images for learning. With this algorithm, VOCUS detects targets quickly and robustly in numerous real-world scenes.
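The abstract does not spell out the selection procedure, but the core idea — choosing a subset of training images that yields better detection than any single image or the full set — can be illustrated with a generic greedy forward-selection sketch. Everything below (the function name `greedy_select`, the scoring callback, and the toy "usefulness minus redundancy" criterion) is an illustrative assumption, not VOCUS's actual algorithm:

```python
def greedy_select(images, score):
    """Greedily grow a training subset, adding an image only if it
    improves the score of the model learned from the subset.

    `score(subset)` is a user-supplied callback, e.g. the detection
    hit rate of the attention system trained on `subset` and
    evaluated on a validation set.
    """
    chosen = []
    best = score(chosen)
    improved = True
    while improved:
        improved = False
        for img in images:
            if img in chosen:
                continue
            candidate = chosen + [img]
            s = score(candidate)
            if s > best:
                best, chosen = s, candidate
                improved = True
    return chosen, best


# Toy demonstration: each image has a "usefulness" value, and larger
# subsets pay a quadratic redundancy penalty, so a two-image subset
# beats both any single image and the full set.
usefulness = [5, 4, 1]
toy_score = lambda subset: sum(subset) - len(subset) ** 2
subset, best = greedy_select(usefulness, toy_score)
# subset == [5, 4], best == 5 (vs. 4 for the best single image
# and 1 for using all three images)
```

The toy run mirrors the paper's claim: the greedily chosen subset scores higher than either extreme, because adding a redundant image costs more than the information it contributes.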