We present a simple approach for vision-based path following for a mobile robot. Based upon a novel concept called the funnel lane, the coordinates of feature points observed during the replay phase are compared with those obtained during the teaching phase to determine the turning direction. Increased robustness is achieved by coupling the feature coordinates with odometry information. The system requires a single off-the-shelf, forward-looking camera with no calibration (either external or internal, including lens distortion); implicit calibration of the system is needed only in the form of a single controller gain. The algorithm is qualitative in nature, requiring no map of the environment, no image Jacobian, no homography, no fundamental matrix, and no assumption of a flat ground plane. Experimental results demonstrate real-time autonomous navigation in both indoor and outdoor environments, on flat, slanted, and rough terrain, with dynamic occluding objects, over distances of hundreds of meters. We also demonstrate that the same approach works with wide-angle and omnidirectional cameras with only slight modification.
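To make the qualitative decision rule concrete, the sketch below compares each tracked feature's horizontal image coordinate during replay with its teach-phase coordinate, votes for a turning direction, and blends the aggregate vote with an odometry prior through a single gain. The function names, the specific funnel-lane test, the sign convention (positive = turn right), and the odometry weighting are illustrative assumptions, not the exact formulation of the paper.

```python
# Illustrative sketch (not the authors' exact algorithm): a qualitative
# turning decision that compares replay-phase feature coordinates with
# teach-phase coordinates, then blends the result with an odometry prior.

def feature_turn_vote(x_replay, x_teach):
    """Vote for a turning direction from one tracked feature.

    Coordinates are horizontal pixel offsets from the image center
    (calibration-free: no focal length or lens model is used).
    Returns -1 (turn left), +1 (turn right), or 0 (keep heading).
    """
    same_side = (x_replay >= 0) == (x_teach >= 0)
    closer_to_center = abs(x_replay) <= abs(x_teach)
    if same_side and closer_to_center:
        return 0  # feature is consistent with driving straight
    # Otherwise steer so the feature moves back toward its teach-phase position
    # (turning right shifts scene points left in the image, and vice versa).
    return 1 if x_replay > x_teach else -1


def turning_command(features, odom_heading_error, gain=0.5, odom_weight=0.3):
    """Combine per-feature votes with odometry for robustness.

    features: list of (x_replay, x_teach) pairs for currently tracked points.
    odom_heading_error: heading error (radians) predicted from odometry alone.
    gain: the single controller gain mentioned in the abstract (assumed form);
    odom_weight: assumed blending weight between vision and odometry.
    """
    if not features:
        vision_term = 0.0
    else:
        votes = [feature_turn_vote(xr, xt) for xr, xt in features]
        vision_term = sum(votes) / len(votes)  # average vote in [-1, 1]
    # Weighted blend of the qualitative vision vote and the odometry prior.
    return gain * ((1.0 - odom_weight) * vision_term + odom_weight * odom_heading_error)


if __name__ == "__main__":
    # One feature has drifted right of its teach-phase position; the other is
    # still consistent with driving straight, so the command is a mild right turn.
    tracked = [(+40.0, +10.0), (-5.0, -30.0)]
    print(turning_command(tracked, odom_heading_error=0.05))
```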