Omnidirectional Vision for Appearance-Based Robot Localization

  • Authors:
  • Ben J. A. Kröse; Nikos A. Vlassis; R. Bunschoten

  • Venue:
  • Revised Papers from the International Workshop on Sensor Based Intelligent Robots
  • Year:
  • 2000

Abstract

Mobile robots need an internal representation of their environment to do useful things. Usually such a representation is some sort of geometric model. For our robot, which is equipped with a panoramic vision system, we chose an appearance model in which the sensory data (in our case the panoramic images) are modeled as a function of the robot's position. Because images are very high-dimensional vectors, feature extraction is needed before the modeling step. Very often a linear dimensionality reduction is used, where the projection matrix is obtained from a Principal Component Analysis (PCA). PCA is optimal for reconstructing the data, but not necessarily the best linear projection for the localization task. We derived a method that extracts linear features which are optimal with respect to a risk measure reflecting localization performance. We tested the method on a real navigation problem and compared it with an approach based on PCA features.
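The PCA baseline the abstract contrasts against can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image count, pixel count, and feature dimension below are made-up placeholders, and random vectors stand in for flattened panoramic images.

```python
import numpy as np

# Stand-in data: N "panoramic images" flattened to D-dimensional vectors.
# In the paper's setting D would be the pixel count of a panoramic image.
rng = np.random.default_rng(0)
N, D, k = 100, 256, 5            # hypothetical image count, dimension, feature count
X = rng.normal(size=(N, D))

# PCA: center the data, then take the top-k principal directions via SVD.
mean = X.mean(axis=0)
Xc = X - mean                    # centered data matrix
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                     # D x k linear projection matrix

# Each image is reduced to a k-dimensional feature vector; localization
# would then model these features as a function of robot position.
features = Xc @ W
print(features.shape)            # (100, 5)
```

This projection minimizes reconstruction error of the images, which is exactly the abstract's point of criticism: the directions of maximum pixel variance need not be the directions most informative about where the robot is.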