We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: 1) extracting the "gist" of a scene to produce a coarse localization hypothesis and 2) refining that hypothesis by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, yielding an abstract characterization of scene category and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to estimate its position. We test the system in three different outdoor environments, each with its own challenges: a building complex (38.4 m × 54.86 m area, 13,966 test images), a vegetation-filled park (82.3 m × 109.73 m area, 26,397 test images), and an open-field park (137.16 m × 178.31 m area, 34,711 test images). The system localizes, on average, to within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.
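To make the pipeline concrete, the sketch below shows one update step of a generic Monte Carlo localization (particle filter) loop of the kind the abstract refers to. It is a minimal illustration, not the paper's implementation: the `observation_likelihood` function stands in for the gist/saliency matching scores described above, and all parameter values (particle count, noise level, the toy Gaussian likelihood) are hypothetical.

```python
import random
import math

def mcl_step(particles, motion, observation_likelihood, noise=0.1):
    """One Monte Carlo localization update over 2-D (x, y) position hypotheses.

    particles: list of (x, y) position hypotheses.
    motion: (dx, dy) odometry estimate since the last step.
    observation_likelihood: maps an (x, y) hypothesis to a nonnegative weight
        saying how well it explains the current observation (in the system
        above, this role is played by gist and salient-landmark matching).
    """
    # Motion update: shift each particle by the odometry, plus Gaussian noise.
    moved = [(x + motion[0] + random.gauss(0, noise),
              y + motion[1] + random.gauss(0, noise)) for x, y in particles]

    # Measurement update: weight each particle by the observation likelihood.
    weights = [observation_likelihood(p) for p in moved]
    if sum(weights) == 0:
        # No particle explains the observation; fall back to uniform weights
        # (a real system might instead reinject random particles, which is
        # what handles kidnapped-robot events).
        weights = [1.0] * len(moved)

    # Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy run: the observation favors hypotheses near a "true" position (5, 5).
def likelihood(p):
    return math.exp(-((p[0] - 5.0) ** 2 + (p[1] - 5.0) ** 2))

particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
for _ in range(20):
    particles = mcl_step(particles, (0.0, 0.0), likelihood)

mean_x = sum(p[0] for p in particles) / len(particles)
mean_y = sum(p[1] for p in particles) / len(particles)
```

After a few iterations the particle cloud concentrates near the position best supported by the observations; the cloud's mean serves as the position estimate reported by the robot.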