In this paper, we propose a real-time vision-based localization approach for humanoid robots that uses a single camera as the only sensor. To localize the robot accurately, we first build an accurate 3D map of the environment using stereo visual SLAM techniques based on non-linear least-squares optimization (bundle adjustment). Given the resulting 3D reconstruction, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting the geometric relationships between the camera poses and the 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is efficient, easy to implement, and more robust and accurate than existing approaches. By means of visibility prediction, we predict for a query pose only the highly visible 3D points, greatly speeding up data association between 3D map points and the 2D features perceived in the image. This allows the Perspective-n-Point (PnP) problem to be solved efficiently, yielding robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach through several vision-based localization experiments with the HRP-2 humanoid robot.
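The visibility-prediction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes visibility has been learned offline as a boolean keyframe-by-point matrix, and the function name, the nearest-keyframe heuristic, and the thresholds are all illustrative choices.

```python
import numpy as np

def predict_visible_points(query_pose, keyframe_poses, visibility,
                           k=3, min_score=0.5):
    """Return indices of map points likely visible from query_pose.

    query_pose     : (3,) camera position of the query view
    keyframe_poses : (N, 3) positions of the N map keyframes
    visibility     : (N, M) boolean matrix; visibility[i, j] is True if
                     keyframe i observed map point j (learned offline)
    """
    # 1. Find the k keyframes nearest to the query pose.
    d = np.linalg.norm(keyframe_poses - query_pose, axis=1)
    nearest = np.argsort(d)[:k]
    # 2. Score each map point by how often those keyframes observed it.
    score = visibility[nearest].mean(axis=0)
    # 3. Keep only the highly visible points; only these are matched
    #    against perceived 2D features before solving PnP.
    return np.flatnonzero(score >= min_score)
```

The reduced point set returned here is what makes data association cheap: only these candidates are matched to 2D image features before a (typically RANSAC-based) PnP solver estimates the camera pose.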