When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this idea. During a human-guided learning step, the robot performs paths, which are sampled and stored as sets of ordered key images acquired by an on-board camera. The resulting set of visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot's navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual-servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control law guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional perspective cameras). It has been validated on two architectures: in the first, the algorithms run on dedicated hardware and the robot is equipped with a standard perspective camera; in the second, they run on a standard PC and an omnidirectional camera is used.
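The visual-memory and visual-route ideas above can be illustrated with a small sketch. This is not the paper's implementation: it only models the visual memory as a directed graph whose nodes are key-image identifiers, with edges linking consecutive key images of each learned visual path, and retrieves a visual route to a target image with a breadth-first search. All names (`VisualMemory`, `add_visual_path`, `visual_route`) and the use of BFS are illustrative assumptions.

```python
from collections import defaultdict, deque

class VisualMemory:
    """Illustrative sketch of a topological visual memory (assumed design,
    not the paper's code): key images are nodes, learned paths are edges."""

    def __init__(self):
        # key-image id -> list of successor key images along learned paths
        self.adj = defaultdict(list)

    def add_visual_path(self, key_images):
        # Store one learned visual path as a chain of ordered key images.
        for a, b in zip(key_images, key_images[1:]):
            if b not in self.adj[a]:
                self.adj[a].append(b)

    def visual_route(self, start, target):
        # BFS over the topological graph: returns the ordered key images
        # the robot must successively reach (a concatenation of visual
        # path subsets), or None if the target is unreachable.
        parent = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == target:
                route = []
                while node is not None:
                    route.append(node)
                    node = parent[node]
                return route[::-1]
            for nxt in self.adj[node]:
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None

memory = VisualMemory()
memory.add_visual_path(["I0", "I1", "I2", "I3"])  # first learned path
memory.add_visual_path(["I2", "I4", "I5"])        # second path, branching at I2
print(memory.visual_route("I0", "I5"))            # → ['I0', 'I1', 'I2', 'I4', 'I5']
```

In the actual framework, each hop of such a route would then be closed under the homography-based servoing law rather than followed as a planned trajectory.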