In this paper, we present a new bimodal attention system for robotic applications that processes data from different sensor modes simultaneously. Exploiting several sensor modalities is a natural way to capture a variety of object properties; nevertheless, conventional attention systems consider only camera images. In contrast, the input to our system is provided by a bimodal 3D laser scanner mounted on top of an autonomous mobile robot. In a single 3D scan pass, the scanner yields both range and reflectance data. Both data modes are independent of illumination, yielding a robust approach that enables all-day operation. Data from both laser modes are fed into our attention system, which builds on the principles of the standard model of visual attention by Koch and Ullman. The system computes conspicuities of both modes in parallel and fuses them into a single saliency map. The focus of attention is then directed sequentially to the most salient points in this map. We present results on recorded scans of indoor and outdoor scenes, showing the respective advantages of the two sensor modalities and the mode-specific detection of different object properties. Furthermore, as an application of the attention system, we show the recognition of objects for building semantic 3D maps of the robot's environment.
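The pipeline described above (parallel conspicuity computation per mode, fusion into one saliency map, sequential selection of foci) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes grid-shaped conspicuity maps, a simple averaging fusion, and winner-take-all selection with inhibition of return; all function names and parameters (`foci_of_attention`, `inhibit_radius`, etc.) are hypothetical.

```python
import numpy as np

def normalize(m):
    # Scale a conspicuity map to [0, 1] so both modes contribute comparably.
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def foci_of_attention(range_consp, refl_consp, n_foci=3, inhibit_radius=2):
    # Fuse the range and reflectance conspicuity maps into one saliency map
    # (simple average of the normalized maps, as an illustrative choice).
    saliency = 0.5 * (normalize(range_consp) + normalize(refl_consp))
    foci = []
    for _ in range(n_foci):
        # Winner-take-all: attend the currently most salient point.
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        foci.append((y, x))
        # Inhibition of return: suppress a neighborhood so the focus moves on.
        y0, y1 = max(0, y - inhibit_radius), y + inhibit_radius + 1
        x0, x1 = max(0, x - inhibit_radius), x + inhibit_radius + 1
        saliency[y0:y1, x0:x1] = 0.0
    return foci
```

In a real system, the fusion weights and the inhibition radius would be tuned to the scanner's resolution and the scene; the averaging fusion here merely stands in for whatever combination scheme the system uses.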