Trajectory Optimization using Reinforcement Learning for Map Exploration

  • Authors:
  • Thomas Kollar; Nicholas Roy

  • Affiliations:
  • MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), The Stata Center, 32 Vassar Street, 32-331, Cambridge, MA 02139 (both authors)

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2008


Abstract

Automatically building maps from sensor data is a necessary and fundamental skill for mobile robots; as a result, considerable research attention has focused on the technical challenges inherent in the mapping problem. While statistical inference techniques have led to computationally efficient mapping algorithms, the next major challenge in robotic mapping is to automate the data collection process. In this paper, we address the problem of how a robot should plan to explore an unknown environment and collect data in order to maximize the accuracy of the resulting map. We formulate exploration as a constrained optimization problem and use reinforcement learning to find trajectories that lead to accurate maps. We demonstrate this process in simulation and show that the learned policy not only results in improved map building but also transfers successfully to a real robot exploring the MIT campus.
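To make the abstract's framing concrete, the following is a minimal, hypothetical sketch of exploration-as-policy-search: candidate trajectory policies are rolled out in a toy grid world and scored by coverage, a crude stand-in for the map-accuracy objective the paper optimizes (the actual work uses a SLAM filter's map uncertainty and a richer reinforcement-learning formulation; the one-parameter policy, grid size, and scoring function here are illustrative assumptions, not the authors' method).

```python
import random

def simulate(policy_theta, grid=5, steps=20, seed=0):
    """Roll out one exploration trajectory and return a crude proxy for
    map accuracy: the number of distinct grid cells observed (in this toy
    setting, more coverage stands in for lower map uncertainty)."""
    rng = random.Random(seed)
    x = y = 0
    seen = {(0, 0)}
    # Toy one-parameter policy: probability of moving right vs. moving up.
    for _ in range(steps):
        if rng.random() < policy_theta:
            x = min(grid - 1, x + 1)
        else:
            y = min(grid - 1, y + 1)
        seen.add((x, y))
    return len(seen)

def policy_search(candidates, episodes=20):
    """Evaluate each candidate policy over several rollouts and keep the
    one whose trajectories yield the best average coverage score."""
    best, best_score = None, float("-inf")
    for theta in candidates:
        score = sum(simulate(theta, seed=s) for s in range(episodes)) / episodes
        if score > best_score:
            best, best_score = theta, score
    return best, best_score

best_theta, best_score = policy_search([0.0, 0.5, 1.0])
```

A pure right-moving or up-moving policy (theta of 0.0 or 1.0) sweeps a single row or column, while the mixed policy covers more of the grid, so the search prefers it; the same selection principle, applied to realistic robot trajectories and a statistically grounded map-quality objective, is what the paper's reinforcement-learning formulation addresses.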