This paper is about long-term navigation in environments whose appearance changes over time, suddenly or gradually. We describe, implement and validate an approach which allows us to incrementally learn a model whose complexity varies naturally in accordance with the variation of scene appearance. It allows us to leverage the state of the art in pose estimation to build, over many runs, a world model of sufficient richness to allow simple localisation despite a large variation in conditions. As our robot repeatedly traverses its workspace, it accumulates distinct visual experiences that, in concert, implicitly represent the scene variation: each experience captures a visual mode. When operating in a previously visited area, we continually try to localise in these previous experiences while simultaneously running an independent vision-based pose estimation system. Failure to localise in a sufficient number of prior experiences indicates an insufficient model of the workspace and instigates the laying down of the live image sequence as a new distinct experience. In this way, over time we capture the typical time-varying appearance of an environment, and the number of experiences required tends to a constant. Although we focus on vision as the primary sensor throughout, the ideas we present here are equally applicable to other sensor modalities. We demonstrate our approach working on a road vehicle operating over a 3-month period at different times of day, in different weather and lighting conditions. We present extensive results analysing different aspects of the system and approach, in total processing over 136,000 frames captured from 37 km of driving.
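The experience-accumulation loop described above can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: the `localise_in` callback, the `Experience` container and the success threshold `min_successes` are all assumptions introduced for the sketch, standing in for the paper's independent localisers and vision-based pose estimation system.

```python
# Hypothetical sketch of experience-based navigation: localise each live
# frame in stored experiences; when too few localisations succeed, lay
# down the live sequence as a new distinct experience.
from dataclasses import dataclass, field
from typing import Callable, Iterable, List, Optional


@dataclass
class Experience:
    """One stored visual experience: a sequence of frames (a visual mode)."""
    frames: List[object] = field(default_factory=list)


def navigate(frame_stream: Iterable[object],
             localise_in: Callable[[object, Experience], bool],
             min_successes: int = 1) -> List[Experience]:
    """Accumulate experiences over a traversal of the workspace."""
    experiences: List[Experience] = []
    recording: Optional[Experience] = None
    for frame in frame_stream:
        # Try to localise the live frame in every prior experience.
        successes = sum(localise_in(frame, e) for e in experiences)
        if successes >= min_successes:
            # Model of this part of the workspace is sufficient.
            recording = None
        else:
            # Insufficient model: record the live sequence as a new experience.
            if recording is None:
                recording = Experience()
                experiences.append(recording)
            recording.frames.append(frame)
    return experiences
```

As the abstract notes, once the stored experiences cover the typical appearance variation, localisation succeeds on most frames and the number of experiences saturates; in this sketch that corresponds to `recording` staying `None`.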