Two key competencies for mobile robotic systems are localization and the interpretation of semantic context. Vision has recently become the modality of choice for these problems, as it provides richer and more descriptive sensory input than range-based sensors. At the same time, designing and testing vision-based algorithms remains challenging, since large amounts of carefully selected data are required to capture the high variability of visual information. In this paper we present a freely available database that provides a large-scale, flexible testing environment for vision-based topological localization and semantic knowledge extraction in robotic systems. The database contains 76 image sequences acquired in three different indoor environments across Europe. Acquisition was performed with the same perspective and omnidirectional camera setup, in rooms of different functionality and under varying conditions. The database is an ideal testbed for evaluating algorithms in real-world scenarios with respect to both dynamic and categorical variations.
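To illustrate the kind of evaluation such a database enables, the sketch below computes per-room-category accuracy for a place classification run over a labelled image sequence. It is a minimal, hypothetical example: the frame labels, the `category_accuracy` helper, and the flat list-of-labels layout are assumptions for illustration, not the database's actual API or file format.

```python
from collections import defaultdict


def category_accuracy(predictions, ground_truth):
    """Per-room-category accuracy for a place classification run.

    predictions / ground_truth: parallel lists of room-category labels,
    one entry per frame of an image sequence (hypothetical layout).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, true in zip(predictions, ground_truth):
        total[true] += 1
        if pred == true:
            correct[true] += 1
    # Report accuracy separately for each category seen in the ground truth,
    # which exposes categorical variation that a single global score hides.
    return {room: correct[room] / total[room] for room in total}


# Toy usage with made-up labels for a five-frame sequence:
truth = ["corridor", "corridor", "office", "office", "kitchen"]
preds = ["corridor", "office", "office", "office", "kitchen"]
print(category_accuracy(preds, truth))
```

Reporting accuracy per category, rather than one pooled number, matches the database's stated goal of testing robustness to categorical variation: a classifier that excels in corridors but fails in kitchens is immediately visible.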