Mapping, navigation, and learning for off-road traversal

  • Authors:
  • Kurt Konolige (Willow Garage, Menlo Park, California 94025)
  • Motilal Agrawal (SRI International, Menlo Park, California 94025)
  • Morten Rufus Blas (Elektro, Technical University of Denmark (DTU), Lyngby, Denmark)
  • Robert C. Bolles (SRI International, Menlo Park, California 94025)
  • Brian Gerkey (Willow Garage, Menlo Park, California 94025)
  • Joan Solà (LAAS-CNRS, Toulouse, France)
  • Aravind Sundaresan (SRI International, Menlo Park, California 94025)

  • Venue:
  • Journal of Field Robotics - Special Issue on LAGR Program, Part I
  • Year:
  • 2009

Abstract

The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments. The system is very robust: we can typically give it a goal position several hundred meters away and expect it to get there. In this paper we describe the main components that comprise the system, including stereo processing, obstacle and free-space interpretation, long-range perception, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain. © 2008 Wiley Periodicals, Inc.
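
The abstract enumerates the system's main components without detailing them; the sketch below is a minimal, hypothetical illustration of how such a per-frame stereo navigation loop could be organized. All class and function names here are assumptions introduced for illustration, not the authors' code or interfaces.

```python
# Hypothetical sketch of a stereo-based off-road navigation cycle, loosely
# following the components named in the abstract (stereo processing, terrain
# interpretation, visual odometry, map registration, planning, control).
from dataclasses import dataclass, field


@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    heading: float = 0.0


@dataclass
class LocalMap:
    """Grid of traversability costs accumulated as the robot moves."""
    cells: dict = field(default_factory=dict)  # (i, j) -> cost in [0, 1]

    def register(self, labeled_cells, pose, resolution=0.2):
        # Map registration: fold newly labeled cells into the grid at the
        # current pose estimate (placeholder 0.2 m cells, max-cost fusion).
        for (dx, dy), cost in labeled_cells:
            key = (int((pose.x + dx) / resolution), int((pose.y + dy) / resolution))
            self.cells[key] = max(self.cells.get(key, 0.0), cost)


def stereo_process(left_img, right_img):
    """Placeholder: dense stereo matching -> list of 3D points (x, y, z)."""
    return []


def interpret_terrain(points_3d):
    """Placeholder: label points as obstacle (high cost) or free space."""
    return [((p[0], p[1]), 1.0 if p[2] > 0.3 else 0.0) for p in points_3d]


def visual_odometry(prev_pose, left_img, right_img):
    """Placeholder: feature tracking -> incremental pose update."""
    return prev_pose


def plan_path(local_map, pose, goal):
    """Placeholder: grid planner over the cost map -> list of waypoints."""
    return [goal]


def follow_path(pose, path):
    """Placeholder: convert the next waypoint into a velocity command."""
    return {"linear": 0.5, "angular": 0.0}


def navigation_cycle(pose, local_map, goal, left_img, right_img):
    """One perception-planning-control cycle, run for each stereo frame."""
    points_3d = stereo_process(left_img, right_img)
    labeled_cells = interpret_terrain(points_3d)
    pose = visual_odometry(pose, left_img, right_img)
    local_map.register(labeled_cells, pose)
    path = plan_path(local_map, pose, goal)
    command = follow_path(pose, path)
    return pose, command
```

A real system would replace each placeholder with the corresponding module described in the paper (for example, online traversability learning and long-range perception feeding additional costs into the map); the sketch only conveys the per-frame control flow implied by the component list.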