This paper describes a method for spatial representation, place recognition, and qualitative self-localization in dynamic indoor environments based on omnidirectional images. The problem is difficult because the acquired images are perceptually ambiguous and only weakly robust to noise and to the geometric and photometric variations of real-world scenes. The spatial representation is built from invariant signatures using Invariance Theory: we adapt Haar invariant integrals to the particular geometry and image transformations of catadioptric omnidirectional sensors. It follows that combining simple image features in a process of integration over visual transformations and robot motion can build discriminant percepts of the robot's spatial location. We further analyze the invariance properties of the signatures and the apparent relation between their similarity measures and metric distances. These invariance properties can be adapted to drive a hierarchical process, from global room recognition to local, coarse robot localization. The approach is validated in real-world experiments and compared to several local and global state-of-the-art methods. The results demonstrate very good performance of the proposed approach and reveal distinctive behaviors of the global and local methods. The invariant-signature method, while being very time- and memory-efficient, achieves separability comparable to approaches based on local features.
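The core idea of a Haar invariant integral is to average a local feature kernel over the group of image transformations, so the result no longer depends on which group element produced the view. For an unwrapped omnidirectional image, a rotation of the robot about the sensor's vertical axis becomes a cyclic shift of the image columns. The sketch below illustrates this principle only; the kernel, the neighbor offset, and the function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def haar_invariant_signature(panorama, kernel=lambda p, q: np.sqrt(p * q)):
    """Haar-integral invariant over the rotation group of a panoramic image.

    A robot rotation about the vertical axis cyclically shifts the columns
    of the unwrapped omnidirectional image, so averaging a local kernel
    over all column shifts yields a rotation-invariant signature.
    The kernel (geometric mean of horizontally adjacent pixels) is an
    illustrative choice, not the paper's specific feature.
    """
    # local feature: kernel of each pixel with its right neighbor (cyclic)
    feats = kernel(panorama, np.roll(panorama, -1, axis=1))
    # integrating over the rotation group == averaging over all cyclic
    # shifts, which for a local kernel reduces to a mean over columns
    return feats.mean(axis=1)  # one invariant value per image row

rng = np.random.default_rng(0)
img = rng.random((8, 64))            # toy unwrapped panorama, 8 rows
sig = haar_invariant_signature(img)
sig_rot = haar_invariant_signature(np.roll(img, 17, axis=1))  # rotated view
assert np.allclose(sig, sig_rot)     # signature unchanged under rotation
```

Because the integral collapses the rotation degree of freedom, two signatures can be compared directly (e.g., by an L1 or L2 distance) without first aligning the headings of the two views, which is what makes such signatures cheap to match at recognition time.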