Most vision-based approaches to mobile robotics suffer from the limitations imposed by stereo obstacle detection, which is short range and prone to failure. We present a self-supervised learning process for long-range vision that is able to accurately classify complex terrain at distances up to the horizon, thus allowing superior strategic planning. The success of the learning process is due to the self-supervised training data that are generated on every frame: robust, visually consistent labels from a stereo module; normalized wide-context input windows; and a discriminative and concise feature representation. A deep hierarchical network is trained to extract informative and meaningful features from an input image, and the features are used to train a real-time classifier to predict traversability. The trained classifier sees obstacles and paths from 5 to more than 100 m, far beyond the maximum stereo range of 12 m, and adapts very quickly to new environments. The process was developed and tested on the LAGR (Learning Applied to Ground Robots) mobile robot. Results from a ground truth data set, as well as field test results, are given. © 2009 Wiley Periodicals, Inc.
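The self-supervised loop the abstract describes — a short-range stereo module labels near-field image windows on every frame, and those labels train a classifier that is then applied out to the horizon — can be sketched as follows. This is a minimal illustrative toy, not the paper's system: the hand-made 2-D features stand in for the deep network's learned features, the nearest-centroid classifier stands in for the real-time classifier, and all names and data are invented for the example.

```python
# Toy sketch of self-supervised long-range terrain classification.
# Assumptions (not from the paper): windows are (range_m, feature, class)
# tuples, features are hand-made 2-D vectors, and a nearest-centroid
# classifier stands in for the trained real-time classifier.

def stereo_labels(windows, max_range_m=12.0):
    """Stand-in stereo module: returns labels only for near-field windows.

    The true class is used here purely to simulate reliable
    short-range stereo supervision.
    """
    return [(f, c) for r, f, c in windows if r <= max_range_m]

class CentroidClassifier:
    """Toy online classifier: per-class running mean of feature vectors."""
    def __init__(self):
        self.sums = {}    # class -> per-dimension feature sums
        self.counts = {}  # class -> number of training samples

    def train(self, feature, label):
        s = self.sums.setdefault(label, [0.0] * len(feature))
        for i, x in enumerate(feature):
            s[i] += x
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, feature):
        def sq_dist_to_centroid(label):
            n = self.counts[label]
            return sum((x - s / n) ** 2
                       for x, s in zip(feature, self.sums[label]))
        return min(self.counts, key=sq_dist_to_centroid)

# One simulated frame: (range in metres, feature vector, ground-truth class).
frame = [
    (5.0,  [0.90, 0.10], "path"),      # near field: within stereo range
    (8.0,  [0.20, 0.80], "obstacle"),
    (11.0, [0.80, 0.20], "path"),
    (60.0, [0.85, 0.15], "path"),      # far field: beyond 12 m stereo range
    (90.0, [0.15, 0.90], "obstacle"),
]

clf = CentroidClassifier()
for feature, label in stereo_labels(frame):  # self-supervised labels
    clf.train(feature, label)

# Classify the far-field windows stereo could not label.
far_predictions = [clf.predict(f) for r, f, _ in frame if r > 12.0]
```

In the actual system this train-then-extrapolate cycle runs on every frame, which is what lets the classifier adapt quickly to new environments rather than relying on a fixed offline model.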