Distributed value functions for the coordination of decentralized decision makers

  • Authors:
  • Laëtitia Matignon, Laurent Jeanpierre, Abdel-Illah Mouaddib

  • Affiliations:
  • Université de Caen Basse-Normandie, France (all authors)

  • Venue:
  • Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year:
  • 2012


Abstract

In this paper, we propose an approach based on an interaction-oriented resolution of decentralized Markov decision processes (Dec-MDPs), primarily motivated by a real-world application of decentralized decision makers to the exploration and mapping of an unknown environment. This interaction-oriented resolution is based on distributed value function (DVF) techniques that decouple the multi-agent problem into a set of individual agent problems and treat possible interactions among agents as a separate layer. This leads to a significant reduction in computational complexity, since Dec-MDPs are solved as a collection of MDPs. Using this model in multi-robot exploration scenarios, we show that each robot locally computes a strategy that minimizes interactions between the robots and maximizes the space coverage of the team. Our technique has been implemented and evaluated in simulation and in real-world scenarios during a robotic challenge for the exploration and mapping of an unknown environment by mobile robots. We report experimental results from the real-world scenarios and from the challenge, in which our system finished as vice-champion.
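The DVF-style decoupling the abstract describes can be sketched as follows: each robot runs ordinary value iteration on its own MDP, but subtracts a weighted sum of its teammates' value functions from its backup, which pushes robots toward different regions. The toy corridor MDP, the weight `f_weight`, and all function names below are illustrative assumptions, not the authors' implementation.

```python
GAMMA = 0.9

def dvf_value_iteration(states, actions, transition, reward,
                        other_values, f_weight, iters=200):
    """Value iteration for one agent, penalized by teammates' values:
    V_i(s) = max_a sum_{s'} P(s'|s,a) [ r(s,a,s')
             + gamma * (V_i(s') - f_weight * V_others(s')) ]
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (reward(s, a, s2)
                             + GAMMA * (V[s2]
                                        - f_weight * other_values.get(s2, 0.0)))
                        for s2, p in transition(s, a).items())
                    for a in actions)
             for s in states}
    return V

# Toy 1-D corridor with "frontier" rewards at both ends (cells 0 and 4).
states = range(5)
actions = (-1, +1)

def transition(s, a):            # deterministic, clipped moves
    return {min(4, max(0, s + a)): 1.0}

def reward(s, a, s2):            # unexplored frontiers at the corridor ends
    return 1.0 if s2 in (0, 4) else 0.0

# A teammate already values the left frontier, so this agent should head right.
teammate_values = {0: 5.0}
V = dvf_value_iteration(states, actions, transition, reward,
                        teammate_values, f_weight=0.5)

def greedy_action(s):
    """One-step greedy action under the same penalized backup."""
    return max(actions,
               key=lambda a: sum(p * (reward(s, a, s2)
                                      + GAMMA * (V[s2]
                                                 - 0.5 * teammate_values.get(s2, 0.0)))
                                 for s2, p in transition(s, a).items()))
```

From the middle of the corridor the agent heads for the right frontier, because the left one is discounted by the teammate's value: this is the coverage-spreading, interaction-minimizing behavior the abstract attributes to DVF.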