In Simultaneous Localisation and Mapping (SLAM), it is well known that probabilistic filtering approaches which aim to estimate the robot and map state sequentially suffer from poor computational scaling to large map sizes. Various authors have demonstrated that this problem can be mitigated by approximations which treat estimates of features in different parts of a map as conditionally independent, allowing them to be processed separately. When it comes to choosing how to divide a large map into such 'submaps', straightforward heuristics may be sufficient for maps built using sensors such as laser range-finders with limited range, where a regular grid of submap boundaries performs well. With visual sensing, however, the ideal division into submaps is less clear, since a camera has potentially unlimited range and will often observe spatially distant parts of a scene simultaneously. In this paper we present an efficient and generic method for automatically determining a suitable submap division for SLAM maps, and apply this to visual maps built with a single agile camera. We use the mutual information between predicted measurements of features as an absolute measure of correlation, and cluster highly correlated features into groups. Via tree factorisation, we are able to determine not just a single-level submap division but a powerful fully hierarchical correlation and clustering structure. Our analysis and experiments reveal particularly interesting structure in visual maps and give pointers to more efficient approximate visual SLAM algorithms.
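The core quantities above can be illustrated with a minimal sketch. For a jointly Gaussian SLAM state with covariance P, the mutual information between two feature blocks has the closed form I = 0.5 * ln(det(Paa) det(Pbb) / det(P_joint)). The function names, the flat threshold-based grouping, and the example covariance below are all illustrative assumptions; the paper derives a full hierarchical structure via tree factorisation rather than the single-linkage grouping shown here.

```python
import numpy as np

def gaussian_mutual_information(P, idx_a, idx_b):
    """Mutual information (in nats) between two blocks of a joint Gaussian
    state with covariance P, using I = 0.5*(ln det Paa + ln det Pbb - ln det Pab)."""
    a, b = np.asarray(idx_a), np.asarray(idx_b)
    ab = np.concatenate([a, b])
    # slogdet is numerically safer than det for log-determinants.
    s_a = np.linalg.slogdet(P[np.ix_(a, a)])[1]
    s_b = np.linalg.slogdet(P[np.ix_(b, b)])[1]
    s_j = np.linalg.slogdet(P[np.ix_(ab, ab)])[1]
    return 0.5 * (s_a + s_b - s_j)

def cluster_by_mi(P, feature_blocks, threshold):
    """Greedy single-linkage grouping (union-find): merge any two features
    whose pairwise mutual information exceeds `threshold` nats.
    A flat stand-in for the paper's hierarchical tree factorisation."""
    n = len(feature_blocks)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if gaussian_mutual_information(P, feature_blocks[i], feature_blocks[j]) > threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy 3-feature map (1-D features): features 0 and 1 strongly correlated,
# feature 2 independent of both.
P = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(cluster_by_mi(P, [[0], [1], [2]], threshold=0.5))  # → [[0, 1], [2]]
```

With correlation 0.9, I(x0; x1) = 0.5 ln(1/0.19) ≈ 0.83 nats, above the (arbitrary) 0.5 nat threshold, so features 0 and 1 fall into one submap while the uncorrelated feature 2 stays separate.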