Probabilistic terrain classification in unstructured environments

  • Authors:
  • Marcel Häselich; Marc Arends; Nicolai Wojke; Frank Neuhaus; Dietrich Paulus

  • Affiliations:
  • Active Vision Group, AGAS Robotics, Department of Computer Sciences, University of Koblenz-Landau, Universitätsstr. 1, Koblenz, Germany (M. Häselich, M. Arends, F. Neuhaus, D. Paulus); Working Group Realtime Systems, Department of Computer Sciences, University of Koblenz-Landau, Universitätsstr. 1, Koblenz, Germany (N. Wojke)

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2013

Abstract

Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination: it is random and continuous, and the number of possible perceptions and actions is infinite. We describe a terrain classification approach for our autonomous robot based on Markov Random Fields (MRFs) applied to fused 3D laser range and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from the sensor readings. All cells within the grid are classified, and their surfaces are analyzed with regard to negotiability for wheeled robots. Knowledge of our robot's egomotion allows us to fuse previous classification results with current sensor data, filling data gaps and covering regions outside the sensors' field of view. We estimate egomotion by integrating information from an IMU, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments, we achieve a recall of about 90% for detecting streets and obstacles, and we show that the approach is fast enough to run on autonomous mobile robots in real time.
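
The grid-based MRF classification can be illustrated with a short sketch. The following is a minimal, hypothetical example and not the authors' implementation: it assumes a Potts-style pairwise MRF over the 2D grid and approximates the MAP labeling with iterated conditional modes (ICM). The class set, the per-cell unary costs, and the smoothness weight `beta` are placeholders for quantities that would be derived from the fused laser and camera features.

```python
import numpy as np

# Hypothetical terrain classes; the paper's exact label set may differ.
STREET, ROUGH, OBSTACLE = 0, 1, 2
N_CLASSES = 3

def icm_terrain_labels(unary, beta=1.0, n_iters=5):
    """Approximate MAP labeling of a grid MRF via iterated conditional modes.

    unary : (H, W, N_CLASSES) array of per-cell negative log-likelihoods,
            assumed to come from fused laser/camera features (illustrative).
    beta  : Potts smoothness weight encouraging neighboring cells to agree.
    """
    h, w, _ = unary.shape
    labels = unary.argmin(axis=2)  # initialize with the unary-only labeling
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                cost = unary[y, x].copy()
                # 4-neighborhood Potts penalty: each disagreeing
                # neighbor adds beta to the candidate label's cost.
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cost += beta * (np.arange(N_CLASSES) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```

Calling `icm_terrain_labels(unary)` on an `(H, W, 3)` cost array returns an `(H, W)` label map; the pairwise term is what lets evidence from well-observed cells propagate into cells with weak or missing sensor data.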
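The egomotion estimate that lets previous classification results be re-registered under robot motion can likewise be sketched as a small extended Kalman filter. This is a simplified example under assumed models, not the paper's filter: it uses a planar pose state `[x, y, theta]`, a unicycle prediction step driven by wheel odometry, and generic linear measurement updates for GPS position and IMU heading; all noise values are placeholders.

```python
import numpy as np

class EgomotionEKF:
    """Minimal planar-pose EKF sketch: state = [x, y, theta] (assumed)."""

    def __init__(self):
        self.x = np.zeros(3)                  # pose estimate
        self.P = np.eye(3) * 0.1              # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])  # process noise (placeholder)

    def predict(self, v, omega, dt):
        """Propagate the pose with a unicycle model from wheel odometry."""
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt,
                            v * np.sin(th) * dt,
                            omega * dt])
        # Jacobian of the motion model, used to propagate the covariance.
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H, R):
        """Linear measurement update; H selects the observed state entries."""
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R              # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

In this sketch, a GPS fix would be applied with `H = [[1, 0, 0], [0, 1, 0]]` and an IMU heading with `H = [[0, 0, 1]]`, each with its own measurement noise `R`.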