Panoramic representation for route recognition by a mobile robot
International Journal of Computer Vision - Special issue on machine vision research at Osaka University
A route navigation method for a mobile robot equipped with an omnidirectional image sensor is described. The route is memorized as a series of consecutive omnidirectional images of the horizon captured while the robot moves to its goal. During navigation, the input image sequence is matched against the memorized spatio-temporal route pattern using dual active contour models, and the robot's position and orientation are estimated from the converged shape of the active contours.
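The paper's dual active contour matching is more elaborate, but the underlying idea of a spatio-temporal route memory can be sketched with a simpler stand-in: stack the 1-D horizon scans into a 2-D pattern (route position × azimuth), then localize a query scan by brute-force normalized correlation over all route positions and circular azimuth shifts. The function names and the correlation-based matcher below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def memorize_route(horizon_scans):
    """Stack 1-D horizon scans (one per robot step) into a 2-D
    spatio-temporal route pattern: rows = route positions, cols = azimuth."""
    return np.stack(horizon_scans, axis=0)

def localize(route_pattern, query_scan):
    """Return the (route position, azimuth shift) whose memorized scan best
    matches the query under normalized cross-correlation. A stand-in for the
    paper's active-contour matching, assumed here for illustration only."""
    n_pos, n_az = route_pattern.shape
    # Zero-mean, unit-variance normalization makes the score illumination-tolerant.
    q = (query_scan - query_scan.mean()) / (query_scan.std() + 1e-9)
    best_score, best_pos, best_shift = -np.inf, 0, 0
    for pos in range(n_pos):
        m = route_pattern[pos]
        m = (m - m.mean()) / (m.std() + 1e-9)
        for shift in range(n_az):
            # Circular shift models the robot's unknown heading (orientation).
            score = np.dot(np.roll(m, shift), q) / n_az
            if score > best_score:
                best_score, best_pos, best_shift = score, pos, shift
    return best_pos, best_shift
```

The azimuth shift of the best match directly encodes the robot's heading offset relative to the memorized traversal, which is why a full circular search is performed at every candidate route position.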