Teamwork in distributed POMDPs: execution-time coordination under model uncertainty

  • Authors:
  • Jun-young Kwak; Rong Yang; Zhengyu Yin; Matthew E. Taylor; Milind Tambe

  • Affiliations:
  • University of Southern California, Los Angeles, CA (Kwak, Yang, Yin, Tambe); Lafayette College, Easton, PA (Taylor)

  • Venue:
  • The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year:
  • 2011

Abstract

Despite their NEXP-complete policy generation complexity [1], Distributed Partially Observable Markov Decision Problems (DEC-POMDPs) have become a popular paradigm for multiagent teamwork [2, 6, 8]. DEC-POMDPs can quantitatively express both observational and action uncertainty while still supporting optimal planning of communication and domain actions.
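
As background for the abstract's claim about expressing uncertainty, a DEC-POMDP is usually defined as the following tuple (standard notation from the DEC-POMDP literature, not necessarily the notation used in this paper):

\[
  \langle I,\ S,\ \{A_i\},\ P,\ \{\Omega_i\},\ O,\ R \rangle
\]

where $I$ is the set of agents, $S$ the set of world states, $A_i$ the action set of agent $i$, $P(s' \mid s, \vec{a})$ the joint transition function (action uncertainty), $\Omega_i$ the observation set of agent $i$, $O(\vec{\omega} \mid s', \vec{a})$ the joint observation function (observational uncertainty), and $R(s, \vec{a})$ the shared team reward. Because each agent must act on its own local observation history rather than the true state, computing an optimal joint policy is NEXP-complete [1].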