This paper presents a vision framework that enables feature-oriented, appearance-based navigation in large outdoor environments containing other moving objects. The framework is based on a hybrid topological-geometrical environment representation, constructed from a learning sequence acquired while the robot moves under human control. At the higher, topological layer, the representation contains a graph of key-images in which incident nodes share many natural landmarks. The lower, geometrical layer makes it possible to predict the projections of the mapped landmarks onto the current image, so that their tracking can be started (or resumed) on the fly. The desired navigation functionality is achieved without requiring global geometrical consistency of the underlying environment representation. The framework has been experimentally validated in demanding and cluttered outdoor environments under different imaging conditions. The experiments were performed on many long sequences acquired from moving cars, as well as in large-scale real-time navigation experiments relying exclusively on a single perspective vision sensor. The obtained results confirm the viability of the proposed hybrid approach and indicate interesting directions for future work.
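The geometrical layer's core operation — predicting where mapped landmarks should appear in the current image so their tracking can start or resume — can be illustrated with a standard pinhole projection. This is a minimal sketch under assumed conventions (world-to-camera pose, column-vector intrinsics); the function name, calibration values, and landmark coordinates are illustrative, not taken from the paper.

```python
import numpy as np

def project_landmarks(landmarks_w, K, R, t):
    """Predict pixel projections of mapped 3D landmarks.

    landmarks_w : (N, 3) landmark positions in the world frame
    K           : (3, 3) camera intrinsic matrix
    R, t        : rotation (3, 3) and translation (3,), world -> camera
    Returns (N, 2) predicted pixel coordinates.
    """
    cam = landmarks_w @ R.T + t       # transform landmarks into the camera frame
    uvw = cam @ K.T                   # apply the pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective division to pixel coordinates

# Illustrative calibration: focal length 500 px, principal point (320, 240)
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
R, t = np.eye(3), np.zeros(3)         # identity pose for the example
pts = np.array([[0.0, 0.0, 2.0],      # landmark on the optical axis
                [0.5, 0.0, 2.0]])     # landmark offset to the right
print(project_landmarks(pts, K, R, t))
```

Once the predicted projections are available, a tracker only needs to search a small window around each prediction, which is what allows tracking to be resumed on the fly rather than re-detecting landmarks from scratch.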