Mobile robot visual navigation using multiple features

  • Authors:
  • Nick Pears, Bojian Liang, Zezhi Chen

  • Affiliations:
  • Department of Computer Science, University of York, York, UK (all authors)

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2005


Abstract

We propose a method to segment the ground plane from a mobile robot's visual field of view and then to measure the height of non-ground-plane features above that plane. A mobile robot can thus determine what it can drive over, what it can drive under, and what it must manoeuvre around. Beyond obstacle avoidance, these data could also be used for localisation and map building. All of this is possible with an uncalibrated camera (raw pixel coordinates only), but is restricted to (near) pure translational motion of the camera. The main contributions are (i) a novel reciprocal-polar (RP) image rectification, (ii) ground plane segmentation by sinusoidal model fitting in RP-space, (iii) a novel projective construction for measuring affine height, and (iv) an algorithm that can exploit a variety of visual features and therefore operate in a wide variety of visual environments.
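To make the two core ideas of the abstract concrete, the sketch below maps raw pixel coordinates into reciprocal-polar space (polar angle, reciprocal radius about a chosen origin) and fits the sinusoidal model that, under (near) pure translation, the RP-disparities of ground-plane features should follow. The function names, the choice of rectification origin, and the exact sinusoid parameterisation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def reciprocal_polar(points, center):
    """Map image points (x, y) to reciprocal-polar (theta, 1/r) coordinates.

    `points` is an (N, 2) array of raw pixel coordinates; `center` is the
    chosen rectification origin (an assumption for this sketch).
    Returns an (N, 2) array of (polar angle, reciprocal radius).
    """
    d = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])           # radial distance from origin
    theta = np.arctan2(d[:, 1], d[:, 0])     # polar angle
    return np.column_stack([theta, 1.0 / r])  # reciprocal radius

def fit_sinusoid(theta, disparity):
    """Least-squares fit of disparity(theta) = a*sin(theta) + b*cos(theta).

    Ground-plane features in RP-space are expected to lie on such a
    sinusoid under (near) pure translation; non-ground features deviate
    from it and could be rejected as outliers (e.g. by thresholding the
    residual). The linear parameterisation here is an assumption.
    """
    A = np.column_stack([np.sin(theta), np.cos(theta)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(disparity, float), rcond=None)
    return a, b
```

Because the sinusoid is linear in (a, b), the fit is a plain linear least-squares problem, which also makes it easy to wrap in a robust estimator (e.g. RANSAC) to separate ground-plane inliers from obstacle outliers.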