In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating an unknown environment for the first time, a natural strategy is to memorize a few key views along the traveled path and to use these references as checkpoints for future navigation missions. The navigation framework for wheeled vehicles presented in this paper is based on this idea. During a human-guided learning step, the vehicle performs paths that are sampled and stored as sets of ordered key images acquired by an embedded camera. These visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, a navigation mission is defined as a concatenation of visual path subsets, called a visual route. During autonomous navigation, the controller guides the vehicle along the reference visual route without explicitly planning any trajectory; it consists of a vision-based control law adapted to the nonholonomic constraint of the vehicle. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment along a 750-m-long trajectory have been carried out with a fisheye camera; the results validate our approach.
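The visual-memory idea above can be illustrated with a minimal sketch: learned visual paths are stored as ordered sequences of key images, linked into a topological graph, and a visual route is then the shortest ordered sequence of key images connecting the current key image to the target. The function names, node identifiers, and the use of breadth-first search here are illustrative assumptions, not the paper's actual implementation (which operates on real images and a vision-based controller).

```python
from collections import deque

def build_visual_memory(paths):
    """Build a directed adjacency map linking consecutive key images of
    each learned visual path; key images shared between paths connect the
    paths topologically. Illustrative only: images are reduced to ids,
    and edges follow the direction of travel during learning."""
    adj = {}
    for path in paths:
        for a, b in zip(path, path[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set())
    return adj

def visual_route(memory, start, target):
    """Breadth-first search for a shortest ordered sequence of key images
    (a 'visual route') from the current key image to the target image.
    Returns None when the target is not reachable in the memory graph."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == target:
            route = []
            while node is not None:   # walk parents back to the start
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nxt in memory.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Two learned paths sharing key image "I1"; a mission from "I0" to "I3"
# concatenates subsets of both paths into one visual route.
memory = build_visual_memory([["I0", "I1", "I2"], ["I1", "I3"]])
print(visual_route(memory, "I0", "I3"))
```

In the full framework, following the resulting route is delegated to the vision-based control law: the vehicle servos from one key image to the next, so no metric trajectory ever needs to be planned.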