Journal of Intelligent and Robotic Systems
Performance evaluation of visual SLAM using several feature extractors
We present a performance evaluation framework for visual feature extraction and matching in the context of visual simultaneous localization and mapping (SLAM). Although feature extraction is a crucial component of visual SLAM, no qualitative study comparing different techniques from this perspective exists. We extend previous image-pair evaluation methods to handle non-planar scenes and the multiple-image-sequence requirements of our application, and compare three feature extractors popular in visual SLAM: the Harris corner detector, the Kanade-Lucas-Tomasi (KLT) tracker, and the Scale-Invariant Feature Transform (SIFT). We present results from a typical indoor environment in the form of recall/precision curves, and also investigate how increasing the distance between image viewpoints affects extractor performance. Our results show that all three methods can be made to perform well, although measurable differences between them remain. We conclude by presenting guidelines, based on our experiments, for selecting a feature extractor for visual SLAM.
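To illustrate one of the three extractors compared, the Harris detector scores each pixel by the eigenvalue structure of a local gradient covariance (the structure tensor): both eigenvalues large indicates a corner, one large indicates an edge, both small indicates a flat region. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the 3x3 averaging window and the sensitivity constant k = 0.04 are common illustrative defaults, not values from the paper.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients averaged over a 3x3 window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # central-difference gradients (rows, cols)

    def box3(a):
        # Average over a 3x3 neighborhood (edge-padded box filter).
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx = box3(Ix * Ix)
    Syy = box3(Iy * Iy)
    Sxy = box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Demo on a synthetic bright square: the response is positive at the
# square's corners, negative along its edges, and ~0 in flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
```

In practice the response map is thresholded and non-maximum suppressed to yield discrete corner locations; the sign pattern above (positive at corners, negative on edges) is what makes the threshold meaningful.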