Vision-Based SLAM in Real-Time

  • Authors:
  • Andrew J. Davison

  • Affiliations:
  • Department of Computing, Imperial College London, London SW7 2AZ, UK

  • Venue:
  • IbPRIA '07 Proceedings of the 3rd Iberian conference on Pattern Recognition and Image Analysis, Part I
  • Year:
  • 2007


Abstract

When a world-observing camera moves through a scene capturing images continuously, it is possible to analyse the images to estimate its ego-motion, even if nothing is known in advance about the contents of the scene around it. The key to solving this apparently chicken-and-egg problem is to detect and repeatedly measure a number of salient 'features' in the environment as the camera moves. Under the usual assumption that most of these are rigidly related in the world, the many geometric constraints on relative camera/feature locations provided by image measurements allow one to solve simultaneously for both the camera motion and the 3D world positions of the features. While global optimisation algorithms are able to achieve the most accurate solutions to this problem, the consistent theme of my research has been to develop 'Simultaneous Localisation and Mapping' (SLAM) algorithms using probabilistic filtering which permit sequential, hard real-time operation.
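The filtering idea the abstract describes — jointly estimating camera motion and static feature positions from repeated relative measurements — can be illustrated with a toy Kalman-filter SLAM loop. This is a minimal 2D sketch with a linear measurement model and one landmark, not Davison's actual MonoSLAM formulation (which uses a 6-DoF camera state and nonlinear projective measurements); all variable names and noise values here are illustrative assumptions.

```python
import numpy as np

# Toy 2D filtering-SLAM sketch: state x = [cam_x, cam_y, feat_x, feat_y].
# Each measurement is the feature's position relative to the camera, so
# it jointly constrains the camera's motion and the feature's location --
# the "chicken-and-egg" coupling resolved by the joint filter.

def predict(x, P, u, Q):
    # Camera moves by odometry u; the feature is assumed static.
    x = x + np.array([u[0], u[1], 0.0, 0.0])
    P = P + Q                       # linear motion model: F = I
    return x, P

def update(x, P, z, R):
    # z = feature position relative to the camera (linear model).
    H = np.array([[-1.0, 0.0, 1.0, 0.0],
                  [ 0.0, -1.0, 0.0, 1.0]])
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

rng = np.random.default_rng(0)
cam_true = np.zeros(2)
feat_true = np.array([2.0, 0.0])    # ground-truth feature (illustrative)

x = np.array([0.0, 0.0, 2.5, 0.5])  # poor initial feature guess
P = np.diag([0.0, 0.0, 1.0, 1.0])   # camera start known, feature uncertain
Q = np.diag([1e-4, 1e-4, 0.0, 0.0]) # process noise on the camera only
R = 0.05**2 * np.eye(2)             # measurement noise

for _ in range(30):
    u = np.array([0.1, 0.0])        # camera drifts right each step
    cam_true = cam_true + u
    x, P = predict(x, P, u, Q)
    z = feat_true - cam_true + rng.normal(0.0, 0.05, 2)
    x, P = update(x, P, z, R)

print("feature estimate:", x[2:])   # converges towards (2, 0)
```

Repeated measurements shrink the feature's covariance while keeping the camera and map estimates consistent through their cross-correlations — the property that lets such filters run sequentially in hard real time, in contrast to batch global optimisation.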