Reinforcement learning for decentralized planning under uncertainty

  • Authors:
  • Landon Kraemer

  • Affiliations:
  • The University of Southern Mississippi, Hattiesburg, MS, USA

  • Venue:
  • Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2013)

  • Year:
  • 2013

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques, however, require centralized computation and full knowledge of the underlying model. In real-world scenarios, model parameters may not be known a priori, or may be difficult to specify. We propose to address these limitations with distributed reinforcement learning (RL).
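
The abstract only sketches the proposal, so as a rough illustration of what distributed RL in a Dec-POMDP setting can look like, the following is a minimal sketch, not the paper's algorithm: each agent independently runs tabular Q-learning over its own local observations and actions, using only the shared team reward. The environment, agent count, and all parameter values here are hypothetical choices for the example.

```python
# Minimal sketch of distributed (independent) Q-learning in a toy
# Dec-POMDP-style problem. Illustrative assumption, NOT the paper's
# method: each agent learns from its local observation and the shared
# team reward, with no access to the model or the other agent's policy.
import random
from collections import defaultdict

class IndependentQLearner:
    """One agent's Q-table over (local observation, local action) pairs."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)          # (obs, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, obs):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(obs, a)])

    def update(self, obs, action, reward, next_obs):
        # Standard Q-learning backup using only local information.
        best_next = max(self.q[(next_obs, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(obs, action)] += self.alpha * (td_target - self.q[(obs, action)])

def step(joint_action):
    """Hypothetical coordination task: the team is rewarded only when
    both agents choose the same action; each then receives a private,
    uninformative observation (the partial-observability aspect)."""
    reward = 1.0 if joint_action[0] == joint_action[1] else 0.0
    next_obs = [random.choice(("o1", "o2")) for _ in joint_action]
    return reward, next_obs

agents = [IndependentQLearner(actions=("a", "b")) for _ in range(2)]
obs = ["o1", "o1"]
for _ in range(10_000):
    actions = [ag.act(o) for ag, o in zip(agents, obs)]
    reward, next_obs = step(actions)
    for ag, o, a, o2 in zip(agents, obs, actions, next_obs):
        ag.update(o, a, reward, o2)          # each agent updates locally
    obs = next_obs
```

A known caveat of this independent-learner setup is that each agent's environment is non-stationary from its own point of view, since the other agent's policy is changing at the same time; addressing that coordination problem is precisely what makes RL for Dec-POMDPs harder than single-agent RL.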