Solving the online SLAM problem with an omnidirectional vision system

  • Authors:
  • Vitor Campanholo Guizilini; Jun Okamoto, Jr.

  • Affiliations:
  • Advanced Perception Laboratory, Department of Mechatronics and Mechanical Systems Engineering, Escola Politécnica da Universidade de São Paulo, São Paulo, SP, Brazil (both authors)

  • Venue:
  • ICONIP '08: Proceedings of the 15th International Conference on Advances in Neuro-Information Processing, Part I
  • Year:
  • 2008

Abstract

A solution to the problem of simultaneous localization and mapping (SLAM) would be of great value to the field of autonomous robots. One common approach to this problem relies on landmarks established in the environment, using artificial structures or predetermined objects, which limits its applicability to general tasks. This paper presents a solution to the SLAM problem that uses an omnidirectional vision system to build a sparse landmark map composed of natural structures recognized in the environment; this map is then used during navigation to correct odometric errors accumulated over time. Visual sensors are a natural and compact way to obtain the rich, wide characterization of the environment needed to extract natural landmarks, and omnidirectional vision increases the amount of information received at each instant. The solution has been tested in real navigation situations, and the results show that omnidirectional vision sensors are a valid and desirable way of obtaining the information needed to solve the SLAM problem.
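
To illustrate the general idea of correcting accumulated odometric error with landmark observations, the sketch below shows a generic EKF-style predict/correct cycle for a single range-bearing observation of a mapped landmark. This is not the authors' formulation or their omnidirectional vision pipeline; the motion model, sensor model, landmark position, and noise matrices are illustrative assumptions only.

```python
# Minimal, generic sketch of landmark-based pose correction (EKF step).
# NOT the paper's method: models, noise values, and landmark are assumed.
import numpy as np


def motion_model(pose, u):
    """Predict the next pose (x, y, theta) from an odometry increment
    u = (forward distance, heading change)."""
    x, y, theta = pose
    d, dtheta = u
    return np.array([x + d * np.cos(theta),
                     y + d * np.sin(theta),
                     theta + dtheta])


def observation_model(pose, landmark):
    """Expected range and bearing to a known landmark."""
    dx, dy = landmark - pose[:2]
    bearing = np.arctan2(dy, dx) - pose[2]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(np.sin(bearing), np.cos(bearing))])


def ekf_step(pose, P, u, z, landmark, Q, R):
    """One predict/correct cycle: odometry prediction followed by a
    correction from one landmark observation z = (range, bearing)."""
    # --- prediction from odometry ---
    d, _ = u
    theta = pose[2]
    F = np.array([[1, 0, -d * np.sin(theta)],
                  [0, 1,  d * np.cos(theta)],
                  [0, 0,  1]])
    pose_pred = motion_model(pose, u)
    P_pred = F @ P @ F.T + Q

    # --- correction from the landmark observation ---
    dx, dy = landmark - pose_pred[:2]
    q = dx**2 + dy**2
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],
                  [ dy / q,          -dx / q,          -1]])
    innovation = z - observation_model(pose_pred, landmark)
    innovation[1] = np.arctan2(np.sin(innovation[1]), np.cos(innovation[1]))
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    pose_new = pose_pred + K @ innovation
    P_new = (np.eye(3) - K @ H) @ P_pred
    return pose_new, P_new


if __name__ == "__main__":
    pose = np.array([0.0, 0.0, 0.0])   # initial pose estimate
    P = np.diag([0.01, 0.01, 0.01])    # initial pose uncertainty
    Q = np.diag([0.05, 0.05, 0.02])    # odometry noise (assumed)
    R = np.diag([0.10, 0.05])          # sensor noise (assumed)
    landmark = np.array([5.0, 2.0])    # a previously mapped landmark (assumed)
    u = np.array([1.0, 0.1])           # odometry reading
    z = np.array([4.3, 0.45])          # observed range and bearing
    pose, P = ekf_step(pose, P, u, z, landmark, Q, R)
    print("corrected pose:", pose)
```

In the paper's setting, the observations would instead come from natural landmarks extracted from omnidirectional images, but the role of the correction step is the same: the landmark measurement pulls the odometry-only estimate back toward the true pose and shrinks its covariance.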