A bilinear programming approach for multiagent planning
Journal of Artificial Intelligence Research
Decentralized Markov decision processes (Dec-MDPs) are a powerful general model of decentralized, cooperative multi-agent problem solving. The high complexity of the general problem has led to a focus on restricted models. While the worst-case complexity of such restricted problems is often lower, less is known about the actual difficulty of given instances. We show tight connections between the structure of agent interactions and the essential dimensionality of various problems. We place bounds on problem difficulty, given restrictions on the type and number of interactions between agents. These bounds arise from a bilinear programming formulation of the problem; from such a formulation, a more compact reduced form can be generated automatically, and the original problem rewritten to take advantage of the reduction.
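To make the bilinear programming idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: a two-agent coordination problem reduced to maximizing a bilinear objective x^T A y, where x and y are each agent's strategy over a probability simplex. A standard way to attack such problems is alternating best response (coordinate ascent): with one strategy fixed, the objective is linear in the other, so each subproblem is solved at a simplex vertex. The payoff matrix `A`, the helper names, and the iteration count are all hypothetical.

```python
def best_vertex(scores):
    # Over the probability simplex, a linear objective is maximized at a
    # vertex: put all probability mass on the best-scoring coordinate.
    i = max(range(len(scores)), key=lambda k: scores[k])
    x = [0.0] * len(scores)
    x[i] = 1.0
    return x

def alternate(A, iters=20):
    # Alternating best response for max_{x,y in simplices} x^T A y.
    # Converges to a local optimum; a global bilinear solver would need
    # branch-and-bound or successive approximation on top of this step.
    m, n = len(A), len(A[0])
    y = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # Fix y: the objective is linear in x with coefficients (A y).
        x = best_vertex([sum(A[i][j] * y[j] for j in range(n))
                         for i in range(m)])
        # Fix x: the objective is linear in y with coefficients (x^T A).
        y = best_vertex([sum(x[i] * A[i][j] for i in range(m))
                         for j in range(n)])
    value = sum(x[i] * A[i][j] * y[j]
                for i in range(m) for j in range(n))
    return x, y, value

# Hypothetical 2x2 joint-reward matrix: both agents should pick action 1.
A = [[1.0, 0.0], [0.0, 2.0]]
x, y, v = alternate(A)
# x and y both concentrate on the second action; the value is 2.0
```

In this register, the "reduced form" the abstract mentions corresponds to exploiting structure in `A`: when agent interactions are sparse, the matrix is low-rank or block-structured, so each best-response step touches far fewer terms than the full joint model.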