The complexity of decentralized control of Markov decision processes

  • Authors:
  • Daniel S. Bernstein, Shlomo Zilberstein, Neil Immerman

  • Affiliations:
  • Department of Computer Science, University of Massachusetts, Amherst, Massachusetts (all authors)

  • Venue:
  • UAI'00: Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2000


Abstract

Planning for distributed agents with partial state information is considered from a decision-theoretic perspective. We describe generalizations of both the MDP and POMDP models that allow for decentralized control. For even a small number of agents, the finite-horizon problems corresponding to both of our models are complete for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov processes. In contrast to the MDP and POMDP problems, the problems we consider provably do not admit polynomial-time algorithms and most likely require doubly exponential time to solve in the worst case. We have thus provided mathematical evidence corresponding to the intuition that decentralized planning problems cannot easily be reduced to centralized problems and solved exactly using established techniques.
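
The decentralized models referred to in the abstract are commonly known as the DEC-MDP and DEC-POMDP. As a rough illustration of why decentralization changes the problem, the following is a minimal Python sketch (not the authors' notation) of a DEC-POMDP tuple together with a brute-force finite-horizon evaluator for a fixed joint policy. The class name `DecPOMDP`, the function `evaluate`, and all field names are assumptions introduced here for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class DecPOMDP:
    """A finite DEC-POMDP: n agents act on local observations of a shared state."""
    states: List[str]
    actions: List[List[str]]        # actions[i]: local actions available to agent i
    observations: List[List[str]]   # observations[i]: local observations of agent i
    transition: Callable[[str, Tuple[str, ...]], Dict[str, float]]               # P(s' | s, joint action)
    observe: Callable[[Tuple[str, ...], str], Dict[Tuple[str, ...], float]]      # P(joint obs | joint action, s')
    reward: Callable[[str, Tuple[str, ...]], float]                              # R(s, joint action)
    start: Dict[str, float]                                                      # initial state distribution

def evaluate(model: DecPOMDP, policies, horizon: int) -> float:
    """Expected total reward over `horizon` steps when agent i chooses its action
    with policies[i]: local observation history (tuple) -> local action."""
    n = len(policies)

    def recurse(state: str, histories: Tuple[Tuple[str, ...], ...], t: int) -> float:
        if t == horizon:
            return 0.0
        # Each agent acts only on its own observation history.
        joint_a = tuple(policies[i](histories[i]) for i in range(n))
        value = model.reward(state, joint_a)
        for s2, p_trans in model.transition(state, joint_a).items():
            future = 0.0
            for joint_o, p_obs in model.observe(joint_a, s2).items():
                new_hist = tuple(histories[i] + (joint_o[i],) for i in range(n))
                future += p_obs * recurse(s2, new_hist, t + 1)
            value += p_trans * future
        return value

    empty = tuple(() for _ in range(n))
    return sum(p0 * recurse(s0, empty, 0) for s0, p0 in model.start.items())
```

The sketch makes the source of difficulty visible: each agent's policy maps its private observation history to an action, so the number of candidate joint policies grows doubly exponentially with the horizon, and no single agent (or centralized planner acting on a shared belief state) can collapse the histories into one sufficient statistic. This is the intuition behind the abstract's claim that the decentralized finite-horizon problems are NEXP-complete and so, unlike MDPs and POMDPs, cannot be solved by the established centralized techniques.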