Mobile Robotics Planning Using Abstract Markov Decision Processes

  • Authors:
  • Pierre Laroche;Francois Charpillet;Rene Schott

  • Venue:
  • ICTAI '99 Proceedings of the 11th IEEE International Conference on Tools with Artificial Intelligence
  • Year:
  • 1999

Abstract

Markov Decision Processes (MDPs) have been successfully used in robotics for indoor navigation problems. They make it possible to compute optimal sequences of actions to achieve a given goal while accounting for actuator uncertainty. However, MDPs are poorly suited to avoiding unknown obstacles. Reactive navigators, by contrast, are particularly well adapted to obstacle avoidance and require no prior knowledge of the environment, but they cannot plan the sequence of actions needed to accomplish a given mission. We present a new state aggregation technique for Markov Decision Processes in which part of the work usually performed by the planner is delegated to a reactive navigator. As a result, some characteristics of the environment, such as corridor width, no longer need to be represented, which allows states to be clustered together and significantly reduces the state space. Consequently, policies are computed faster and are shown to be at least as efficient as optimal ones.
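The idea of planning over aggregated states can be illustrated with a small sketch, which is not the authors' code: value iteration runs on a toy abstract MDP whose states are clusters of grid cells (e.g. whole corridors), while local obstacle avoidance is assumed to be handled by a reactive navigator. All state names, transition probabilities, and rewards below are invented for the example.

```python
# Illustrative sketch only: value iteration on a toy abstract MDP.
# Each abstract state stands for a cluster of grid cells; the coarse
# actions would be handed to a reactive navigator for local execution.

def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-6):
    """P[s][a] is a list of (prob, next_state); R[s] is the state reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[nxt] + gamma * V[nxt]) for p, nxt in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Three abstract states: two corridors (each a cluster of many cells)
# and the goal room. "forward" usually advances toward the goal.
states = ["corridor_A", "corridor_B", "goal"]
actions = ["forward", "turn"]
P = {
    "corridor_A": {"forward": [(0.9, "corridor_B"), (0.1, "corridor_A")],
                   "turn":    [(1.0, "corridor_A")]},
    "corridor_B": {"forward": [(0.9, "goal"), (0.1, "corridor_B")],
                   "turn":    [(1.0, "corridor_A")]},
    "goal":       {"forward": [(1.0, "goal")],
                   "turn":    [(1.0, "goal")]},
}
R = {"corridor_A": 0.0, "corridor_B": 0.0, "goal": 1.0}

V = value_iteration(states, actions, P, R)
# Values increase toward the goal, so a greedy policy moves forward.
assert V["goal"] > V["corridor_B"] > V["corridor_A"]
```

Because the planner only distinguishes a handful of abstract states instead of every grid cell, the value table stays small and convergence is fast; the uncertainty absorbed by the reactive navigator is what makes this coarse model sufficient.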