Mobile robots rely on scene recognition to build a topological map of their environment and perform location-related tasks. In this paper, we describe a novel lightweight scene recognition method for omnidirectional vision that uses an adaptive descriptor based on color features and geometric information. Our method enables the robot to add nodes to a topological map automatically and solves the mobile robot localization problem in real time. The descriptor of a scene is extracted in the YUV color space, and its dimension adapts to the segmentation result of the panoramic image. Furthermore, the descriptor is invariant to rotation and to slight changes in illumination. The robustness of scene matching and recognition is tested through real experiments in a dynamic indoor environment, carried out on a mobile robot equipped with an omnidirectional camera. In our tests, the average processing time is 30 ms per frame, including feature extraction, matching, and the addition of new nodes.
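To illustrate the general idea of a rotation-invariant color descriptor for panoramic images, the sketch below computes mean YUV values over angular sectors of an unwrapped panorama and matches two descriptors by searching over circular shifts. This is a minimal illustration under stated assumptions, not the authors' method: the paper's descriptor dimension adapts to the image segmentation, whereas this sketch fixes the sector count, and the function names (`sector_descriptor`, `match_distance`) are hypothetical.

```python
import numpy as np

def rgb_to_yuv(img):
    # BT.601 RGB -> YUV conversion; img is an HxWx3 float array in [0, 1].
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

def sector_descriptor(pano, n_sectors=36):
    """Mean YUV color per angular sector of an unwrapped panorama.

    Columns of `pano` correspond to bearing, so a pure rotation of the
    robot appears as a circular shift of the descriptor rows.
    """
    yuv = rgb_to_yuv(pano)
    cols = np.array_split(np.arange(pano.shape[1]), n_sectors)
    return np.stack([yuv[:, c, :].reshape(-1, 3).mean(axis=0)
                     for c in cols])  # shape: (n_sectors, 3)

def match_distance(d1, d2):
    """Rotation-invariant distance: minimum over circular alignments."""
    n = d1.shape[0]
    return min(np.linalg.norm(d1 - np.roll(d2, k, axis=0))
               for k in range(n))
```

Because matching only requires `n_sectors` circular shifts of a small array, the comparison cost is far below the per-frame budget reported above, which is consistent with the real-time use case.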