Robot pose estimation by means of a stereo vision system

  • Authors:
  • J. Sogorb; O. Reinoso; A. Gil; L. Paya

  • Affiliations:
  • Automation, Robotics and Computer Vision Systems Engineering and Automation Department, University Miguel Hernández, Elche, Alicante, Spain

  • Venue:
  • WSEAS Transactions on Systems and Control
  • Year:
  • 2008

Abstract

Mobile robots are characterised by their capacity to move autonomously in an environment that is known, unknown, or only partially known. Their uses and applications are wide-ranging, spanning fields such as underground and submarine work, space missions, security systems, and military applications. For this reason, a mobile robot is rarely fitted with only a single sensor to carry out its multiple tasks; it is much more common to combine several complementary sensors within the system, each serving a different function. Thus it is possible to find robots where position estimation and map updating are carried out with video cameras or laser scanners, while obstacle detection is achieved using sonar. In this respect it is important to highlight the close relationship between the problem of position estimation and that of constructing a map of the surroundings: exact localisation of the robot is necessary for map construction, and vice versa. In this work we focus solely on the localisation problem, comparing different algorithms for estimating the trajectory followed by a robot from the observations and readings obtained by the robot itself. In our setting, we work with images taken by a stereoscopic vision system with uncalibrated cameras, we assume that the robot moves on a flat surface, and we use natural landmarks. As we will see, the information obtained from this type of sensor allows a robust estimation of the movement between each pair of observations without the need for the robot's proprioceptive sensors. The solution of this problem, known as visual odometry, is critical to the majority of subsequent navigation processes.
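The abstract describes estimating a robot's trajectory by chaining together the motion estimated between successive pairs of observations, under the assumption of movement on a flat surface. A minimal sketch of that accumulation step (not the authors' method — just the standard composition of planar poses, with illustrative function names) could look like this: each visual-odometry step yields a relative motion (dx, dy, dθ) in the robot's local frame, and the global pose is obtained by composing these increments.

```python
import math

def compose(pose, delta):
    """Compose a planar pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the robot's local frame."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the local displacement into the global frame, then translate.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate_trajectory(start, deltas):
    """Accumulate pairwise motion estimates into a full trajectory."""
    poses = [start]
    for d in deltas:
        poses.append(compose(poses[-1], d))
    return poses

# Example: four identical "move forward 1 m, turn 90 degrees" steps
# trace out a unit square and return the robot to its starting point.
deltas = [(1.0, 0.0, math.pi / 2)] * 4
trajectory = integrate_trajectory((0.0, 0.0, 0.0), deltas)
```

Because each increment comes from noisy image measurements, errors accumulate along the chain; this drift is why visual odometry is usually combined with map-based localisation in the subsequent navigation processes the abstract mentions.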