The Complexity of Decentralized Control of Markov Decision Processes
Mathematics of Operations Research
The complexity of multiagent systems: the price of silence
AAMAS '03 Proceedings of the second international joint conference on Autonomous agents and multiagent systems
An Application of Automated Negotiation to Distributed Task Allocation
IAT '07 Proceedings of the 2007 IEEE/WIC/ACM International Conference on Intelligent Agent Technology
The Journal of Machine Learning Research
Exploiting locality of interaction in factored Dec-POMDPs
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1
Constraint-based dynamic programming for decentralized POMDPs with structured interactions
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Point-based dynamic programming for DEC-POMDPs
AAAI'06 Proceedings of the 21st national conference on Artificial intelligence - Volume 2
Networked distributed POMDPs: a synthesis of distributed constraint optimization and POMDPs
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 1
Solving transition independent decentralized Markov decision processes
Journal of Artificial Intelligence Research
Computing factored value functions for policies in structured MDPs
IJCAI'99 Proceedings of the 16th international joint conference on Artificial intelligence - Volume 2
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Towards a unifying characterization for quantifying weak coupling in Dec-POMDPs
The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Distributed model shaping for scaling to decentralized POMDPs with hundreds of agents
The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
Scalable multiagent planning using probabilistic inference
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three
The Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a powerful model for multiagent planning under uncertainty, but its applicability is hindered by its high complexity: solving Dec-POMDPs optimally is NEXP-hard. Recently, Kumar et al. introduced the Value Factorization (VF) framework, which exploits decomposable value functions that can be factored into subfunctions, each depending on only a subset of the agents. This framework has been shown to generalize several models that leverage sparse agent interactions, such as TI-Dec-MDPs, ND-POMDPs, and TD-POMDPs. Existing algorithms for these models assume that the problem's interaction graph is given. In this paper, we introduce three algorithms that automatically generate interaction graphs for models within the VF framework, and we establish lower and upper bounds on the expected reward of an optimal joint policy. We demonstrate the benefits of these techniques experimentally on sensor placement in a decentralized tracking application.
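The value factorization idea in the abstract can be illustrated with a minimal sketch (not code from the paper): the global value is a sum of subfunctions, each with a small scope of agents, and the interaction graph simply connects agents that appear together in some scope. All names, the agent states, and the subfunctions below are hypothetical, chosen only to make the decomposition concrete.

```python
# Illustrative sketch of a factored value function (hypothetical example,
# not the paper's algorithm). The joint value V(s) = sum over scopes e of
# f_e(s_e), where each subfunction f_e depends only on the agents in its scope.

# Hypothetical local states for 4 agents (e.g., sensor readings).
local_states = {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5}

# Each subfunction's scope is a subset of agents; here, pairwise terms.
subfunctions = {
    (0, 1): lambda s: s[0] * s[1],
    (1, 2): lambda s: s[1] + s[2],
    (2, 3): lambda s: max(s[2], s[3]),
}

def factored_value(states, subfns):
    """Sum the subfunctions; each only reads the agents in its scope."""
    return sum(f(states) for scope, f in subfns.items())

# The interaction graph: one node per agent, one edge per shared scope.
# Agents 0 and 3 never share a scope, so they are not adjacent.
interaction_graph = set(subfunctions)

value = factored_value(local_states, subfunctions)  # 2.0 + 2.5 + 1.5 = 6.0
```

Algorithms that exploit such structure scale with the size of the largest scope rather than the total number of agents, which is why recovering the interaction graph automatically, as the paper proposes, matters when it is not given in advance.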