We present a novel approach to estimating depth from single omnidirectional camera images by learning the relationship between visual features and range measurements available during a training phase. Our model yields not only the most likely distance to obstacles in all directions, but also the predictive uncertainties of these estimates. This information can be used by a mobile robot to build an occupancy grid map of the environment or to avoid obstacles during exploration, tasks that typically require dedicated proximity sensors such as laser range finders or sonars. We show in this paper how an omnidirectional camera can serve as an alternative to such range sensors. As the learning engine, we apply Gaussian processes, a nonparametric approach to function regression, together with a recently developed extension for dealing with input-dependent noise. In practical experiments carried out in different indoor environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system estimates range with an accuracy comparable to that of dedicated sensors based on sonar or infrared light.
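The core of the approach is standard Gaussian process regression from a feature vector (extracted from the omnidirectional image along one viewing direction) to a range value, returning both a predictive mean and a predictive variance. The following is a minimal, self-contained sketch of that regression step with a squared-exponential kernel; the feature extraction, the specific kernel choice, and the hyperparameter values are placeholders, and the sketch uses a constant noise term rather than the input-dependent noise extension mentioned in the abstract.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return signal_var * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise_var=0.01, **kernel_params):
    """GP regression: predictive mean and variance at X_test.

    X_train : (n, d) training features (e.g. image descriptors per direction)
    y_train : (n,)   observed ranges from the training phase
    X_test  : (m, d) query features
    Returns (mean, var), each of shape (m,).
    """
    n = len(X_train)
    K = rbf_kernel(X_train, X_train, **kernel_params) + noise_var * np.eye(n)
    K_s = rbf_kernel(X_train, X_test, **kernel_params)
    K_ss = rbf_kernel(X_test, X_test, **kernel_params)

    # Cholesky-based solve for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0) + noise_var
    return mean, var
```

The predictive variance is what makes the camera usable as a range sensor: directions whose features resemble the training data get confident range estimates, while unfamiliar inputs revert toward the prior variance, signaling that the estimate should be down-weighted when updating an occupancy grid.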