The strength of appearance-based mapping models lies in their ability to represent the environment through high-level image features and to provide human-readable information. However, developing localization and mapping methods with these models can be very challenging, especially when robots must cope with long-term mapping, localization, navigation, occlusions, and dynamic environments. This paper proposes an appearance-based mapping and localization method inspired by the human memory model: a Feature Stability Histogram (FSH) is built at each node of the robot's topological map, each FSH registers the stability of local features over time through a voting scheme, and only the most stable features are retained for mapping and Bayesian localization. Experimental results are presented using omnidirectional images acquired over long-term operation, covering illumination changes (time of day and seasons), occlusions, random removal of features, and perceptual aliasing. The method adapts each node's internal representation over time to achieve both global and local robot localization.
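The FSH voting idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the class name, the simple reward/penalty update, and the stability threshold are all hypothetical stand-ins for the memory-model-based update the abstract alludes to.

```python
# Hypothetical sketch of a Feature Stability Histogram (FSH) at one map node.
# Feature IDs stand in for local image descriptors matched across visits.
class FeatureStabilityHistogram:
    def __init__(self, reward=1.0, penalty=0.5):
        self.scores = {}          # feature id -> accumulated stability vote
        self.reward = reward      # vote added when a feature is re-observed
        self.penalty = penalty    # vote removed when a known feature is missed

    def vote(self, observed_ids):
        """Update the histogram with the features seen in the current image."""
        observed = set(observed_ids)
        # Reinforce features observed at this node.
        for fid in observed:
            self.scores[fid] = self.scores.get(fid, 0.0) + self.reward
        # Penalize previously seen features that were not re-observed,
        # so transient features (occlusions, dynamic objects) fade out.
        for fid in set(self.scores) - observed:
            self.scores[fid] = max(0.0, self.scores[fid] - self.penalty)

    def stable_features(self, threshold=2.0):
        """Return only the most stable features, to be used for
        mapping and localization at this node."""
        return [fid for fid, s in self.scores.items() if s >= threshold]
```

For example, a feature re-observed on every visit accumulates votes and stays above the threshold, while a feature seen once and then occluded decays and is dropped from the node's representation.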