Autonomous exploration using rapid perception of low-resolution image information

  • Authors:
  • Vidya N. Murali; Stanley T. Birchfield

  • Affiliations:
  • Electrical and Computer Engineering Department, Clemson University, Clemson, USA 29634 (both authors)

  • Venue:
  • Autonomous Robots
  • Year:
  • 2012


Abstract

We present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32×24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even using only a small fraction of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor, and the distance to the end of the corridor. The orientation is determined by combining the results of five complementary measures, while the estimated distance to the end combines the results of three complementary measures. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in several unknown corridor buildings exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage of the pixels both spatially and temporally, processing occurs at an average of 1000 frames per second, thus freeing the processor for other concurrent tasks.
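The abstract's core idea, downsampling to a tiny 32×24 grayscale image and computing information-theoretic measures on it, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block-averaging resampler and the Shannon-entropy measure below are assumptions standing in for the paper's unspecified resampling method and its five/three complementary measures.

```python
import numpy as np

def downsample_gray(image, out_h=24, out_w=32):
    """Downsample a 2-D grayscale array to out_h x out_w by block averaging.

    Illustrative only; the paper does not specify its resampling method.
    Assumes the input dimensions are at least out_h x out_w.
    """
    h, w = image.shape
    bh, bw = h // out_h, w // out_w          # block size per output pixel
    trimmed = image[:bh * out_h, :bw * out_w]  # drop any remainder rows/cols
    return trimmed.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))

def entropy_bits(image, bins=256):
    """Shannon entropy (in bits) of the grayscale histogram -- one example
    of the kind of information-theoretic measure the paper combines."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                             # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

# Example on a synthetic 480x640 horizontal-gradient "frame"
frame = np.tile(np.linspace(0, 255, 640), (480, 1))
small = downsample_gray(frame)
print(small.shape)                 # (24, 32)
print(entropy_bits(small))         # 5.0 -- 32 equally frequent gray levels
```

At 32×24 = 768 pixels per frame, even naive per-pixel measures like this are cheap enough to run at very high frame rates, which is the intuition behind the abstract's reported average of 1000 frames per second.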