Autonomous behavior-based switched top-down and bottom-up visual attention for mobile robots
IEEE Transactions on Robotics
A biologically inspired foveated attention system for an object detection scenario is proposed. A high-performance active multi-focal camera system imitates visual behaviors such as scanning, saccades, and fixation. Bottom-up attention uses wide-angle stereo data to select a sequence of fixation points in the peripheral field of view. A subsequent saccade and fixation at high foveal resolution using a telephoto camera enables highly accurate object recognition. Once an object is recognized as a target object, the bottom-up attention model is adapted to the current environment using top-down information extracted from that target object. The bottom-up attention model and the SIFT-based object recognition algorithm are implemented with CUDA technology on Graphics Processing Units (GPUs), which greatly accelerates image processing. In the experimental evaluation, all target objects were detected against different backgrounds, and clear improvements in accuracy, flexibility, and efficiency are achieved.
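The bottom-up stage described above follows the standard saliency-map pattern: compute center-surround feature contrasts, normalize and combine them into a saliency map, then pick the most salient location as the next fixation point. A minimal sketch of that idea, assuming a grayscale intensity channel only and a difference-of-Gaussians center-surround operator (the paper's full model uses multiple feature channels, stereo data, and a CUDA implementation; the function names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray, center_sigmas=(1.0, 2.0), surround_ratio=4.0):
    """Toy bottom-up saliency for an intensity image.

    Simplified sketch: center-surround contrast is approximated as the
    absolute difference of a fine (center) and a coarse (surround)
    Gaussian blur, computed at a few scales, normalized, and averaged.
    """
    gray = gray.astype(float)
    sal = np.zeros_like(gray)
    for sigma in center_sigmas:
        center = gaussian_filter(gray, sigma)
        surround = gaussian_filter(gray, sigma * surround_ratio)
        fm = np.abs(center - surround)          # center-surround contrast
        rng = fm.max() - fm.min()
        if rng > 0:                             # normalize map to [0, 1]
            fm = (fm - fm.min()) / rng
        sal += fm
    return sal / len(center_sigmas)

def next_fixation(sal):
    """Winner-take-all: the most salient pixel becomes the fixation point."""
    return np.unravel_index(np.argmax(sal), sal.shape)
```

In the full system the fixation point selected this way would drive a saccade of the telephoto camera, and the resulting foveal image would be passed to SIFT-based recognition; top-down adaptation then amounts to reweighting the feature maps before they are combined.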