Real-Time Simultaneous Localisation and Mapping with a Single Camera
ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
Active Search for Real-Time Vision
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05) - Volume 1
CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
MonoSLAM: Real-Time Single Camera SLAM
IEEE Transactions on Pattern Analysis and Machine Intelligence
IbPRIA '07 Proceedings of the 3rd Iberian conference on Pattern Recognition and Image Analysis, Part II
Real-time and robust monocular SLAM using predictive multi-resolution descriptors
ISVC'06 Proceedings of the Second international conference on Advances in Visual Computing - Volume Part II
Wide-area augmented reality using camera tracking and mapping in multiple regions
Computer Vision and Image Understanding
When a world-observing camera moves through a scene capturing images continuously, it is possible to analyse those images to estimate its ego-motion, even if nothing is known in advance about the contents of the scene around it. The key to solving this apparently chicken-and-egg problem is to detect and repeatedly measure a number of salient `features' in the environment as the camera moves. Under the usual assumption that most of these features are rigidly related in the world, the many geometric constraints on relative camera/feature locations provided by image measurements allow one to solve simultaneously for both the camera motion and the 3D world positions of the features. While global optimisation algorithms achieve the most accurate solutions to this problem, the consistent theme of my research has been to develop `Simultaneous Localisation and Mapping' (SLAM) algorithms using probabilistic filtering which permit sequential, hard real-time operation.
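To make the filtering idea concrete, the sketch below is a deliberately minimal EKF-SLAM toy (not the MonoSLAM implementation itself): the joint state holds a camera position and one landmark position on a 1-D line, the camera moves by an assumed known control, and each "image measurement" observes only the landmark's offset relative to the camera. All variable names and noise values here are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D EKF-SLAM toy (assumed setup, not Davison's MonoSLAM):
# joint state x = [camera position c, landmark position m].
# Motion: the camera shifts by a known control u; the landmark is static.
# Measurement: z = m - c + noise, i.e. the landmark seen relative to the camera.

def predict(x, P, u, Q):
    """Motion step: camera moves by u, landmark stays; covariance grows by Q."""
    F = np.eye(2)                      # motion is additive, so the Jacobian is I
    x = x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    """Measurement step for z = m - c, giving measurement Jacobian H = [-1, 1]."""
    H = np.array([[-1.0, 1.0]])
    y = z - (x[1] - x[0])              # innovation (scalar)
    S = H @ P @ H.T + R                # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain (2x1)
    x = x + (K @ np.array([[y]])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_c, true_m, u = 0.0, 5.0, 0.1      # ground truth (for simulation only)
x = np.array([0.0, 4.0])               # rough initial guess of the landmark
P = np.diag([0.01, 4.0])               # initial uncertainty
Q = np.diag([0.02, 0.0])               # process noise on the camera only
R = np.array([[0.05]])                 # measurement noise
for _ in range(50):
    true_c += u
    x, P = predict(x, P, u, Q)
    z = true_m - true_c + rng.normal(0.0, 0.05)
    x, P = update(x, P, z, R)
```

Because only relative measurements are made, the absolute camera position is unobservable and drifts with the process noise, but the camera-to-landmark offset converges: exactly the joint, correlated camera/map estimate that sequential filtering maintains at every frame.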