In this paper, we tackle the problem of unsupervised selection and subsequent recognition of visual landmarks in image sequences acquired by an indoor mobile robot. This is a highly valuable perceptual capability for a wide variety of robotic applications, in particular autonomous navigation. Our method combines a bottom-up, data-driven approach with top-down feedback provided by high-level semantic representations. The bottom-up approach rests on three main mechanisms: visual attention, area segmentation, and landmark characterization. Since no single segmentation method works properly in every situation, we integrate multiple segmentation algorithms to increase the robustness of the approach. Top-down feedback is provided by two information sources: (i) an estimate of the robot's position, which narrows the search for matches with previously selected landmarks, and (ii) a set of weights that, according to the results of previous recognitions, controls the influence of each segmentation algorithm on the recognition of each landmark. We test our approach on three datasets corresponding to real-world scenarios, with encouraging results.
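The top-down weighting idea described above can be illustrated with a minimal sketch: each landmark keeps one weight per segmentation algorithm, recognition combines per-segmenter match scores through those weights, and the weights are adapted from the outcome of previous recognitions. All names here (`LandmarkRecognizer`, the segmenter labels, the learning rate) are hypothetical illustrations, not taken from the paper.

```python
class LandmarkRecognizer:
    """Sketch of per-landmark, per-segmenter weighting (assumed design, not the paper's exact method)."""

    def __init__(self, segmenter_names, learning_rate=0.2):
        self.segmenters = list(segmenter_names)
        self.lr = learning_rate
        # weights[landmark][segmenter] -> influence of that segmenter on this landmark
        self.weights = {}

    def _init_landmark(self, landmark):
        n = len(self.segmenters)
        self.weights.setdefault(landmark, {s: 1.0 / n for s in self.segmenters})

    def score(self, landmark, segmenter_scores):
        """Weighted combination of per-segmenter match scores in [0, 1]."""
        self._init_landmark(landmark)
        w = self.weights[landmark]
        return sum(w[s] * segmenter_scores[s] for s in self.segmenters)

    def update(self, landmark, segmenter_scores, recognized):
        """Reinforce segmenters whose scores agreed with the final recognition outcome."""
        self._init_landmark(landmark)
        w = self.weights[landmark]
        for s in self.segmenters:
            # agreement is high when the segmenter's score matches the outcome
            agreement = segmenter_scores[s] if recognized else 1.0 - segmenter_scores[s]
            w[s] = (1.0 - self.lr) * w[s] + self.lr * agreement
        total = sum(w.values())  # renormalize so weights sum to 1
        for s in self.segmenters:
            w[s] /= total
```

A usage example: if the (hypothetical) mean-shift segmenter repeatedly scores high on a landmark that is successfully recognized while graph-cut scores low, `update` shifts that landmark's weight mass toward mean-shift, so later calls to `score` trust it more for that particular landmark.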