Instantaneous robot self-localization and motion estimation with omnidirectional vision

  • Authors:
  • Libor Spacek; Christopher Burbridge

  • Affiliations:
  • Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK (both authors)

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2007

Abstract

This paper presents two related methods for autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance, so they need neither special markings nor placement at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (within a 20th of a second) and thus supporting dynamic models of the robot's motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new non-iterative optic flow method has been developed for this purpose. Adding ω to all observed (robot-relative) bearings θ gives true bearings towards objects (relative to a fixed coordinate frame). The rotation ω and the r, θ coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, which reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from an autonomously moving Scitos G5 robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
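The geometry summarized in the abstract can be illustrated with a short sketch. The Python code below is not the paper's implementation (the function names, the linearized least-squares trilateration, and the toy values are illustrative assumptions); it shows (1) planar self-localization from ranges r to three landmarks at known positions, and (2) recovery of the robot's translation from a single fixed point observed at range/bearing (r, θ) in two consecutive frames, given the interframe rotation ω from the visual compass.

```python
import numpy as np

def trilaterate_2d(landmarks, ranges):
    """Planar trilateration: solve for the robot position (x, y) from
    ranges r_i to three (or more) landmarks (x_i, y_i) at known positions.
    Subtracting the first circle equation from the others linearises the
    system, which is then solved by least squares."""
    L = np.asarray(landmarks, dtype=float)   # shape (n, 2), n >= 3
    r = np.asarray(ranges, dtype=float)      # shape (n,)
    x0, y0 = L[0]
    # (x - xi)^2 + (y - yi)^2 = ri^2 ; subtract the i = 0 equation
    A = 2.0 * (L[1:] - L[0])                              # (n-1, 2)
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(L[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos                                            # (x, y)

def translation_from_point(r1, theta1, r2, theta2, omega):
    """Estimate the robot's planar translation between two frames from one
    fixed scene point seen at (r, theta) in each frame.  theta are
    robot-relative bearings; omega is the interframe rotation, so
    theta + omega is the bearing in the fixed coordinate frame.  Because
    the point itself does not move, the robot's displacement is the
    difference of the two robot-to-point vectors in the fixed frame."""
    p1 = r1 * np.array([np.cos(theta1), np.sin(theta1)])
    p2 = r2 * np.array([np.cos(theta2 + omega), np.sin(theta2 + omega)])
    return p1 - p2   # displacement expressed in the frame-1-aligned axes

# Toy usage: three landmarks and noiseless ranges from position (1, 1).
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = np.array([1.0, 1.0])
ranges = [np.linalg.norm(true_pos - np.array(l)) for l in landmarks]
print(trilaterate_2d(landmarks, ranges))   # ~ [1. 1.]
```

In the paper, many matched guiding points each yield such a translation estimate, and the per-point solutions are then combined by the robust clustering step (Clumat) rather than taken from a single point as in this sketch.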