Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute a generic and expressive framework for multiagent planning under uncertainty. However, planning optimally is difficult because solutions map local observation histories to actions, and the number of such histories grows exponentially in the planning horizon. In this work, we identify a criterion that allows lossless clustering of observation histories: we prove that when two histories satisfy the criterion, they have the same optimal value and can therefore be treated as one. We show how this result can be exploited in optimal policy search and demonstrate empirically that it can provide a speed-up of multiple orders of magnitude, allowing the optimal solution of significantly larger problems. We also perform an empirical analysis of the generality of our clustering method, which suggests that it may also be useful in other (approximate) Dec-POMDP solution methods.
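The clustering idea described above can be illustrated with a minimal sketch. The abstract does not spell out the equivalence criterion, so the code below assumes a hypothetical `signature` function that maps each observation history to a hashable sufficient statistic (in the paper's spirit, something like the induced belief over hidden state); histories with equal signatures are merged into one cluster, shrinking the space an optimal policy search must enumerate. The function names and the toy statistic are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def cluster_histories(histories, signature):
    """Group observation histories that share a signature.

    `signature` is a hypothetical function mapping a history to a
    hashable statistic. Histories with equal signatures are treated
    as one cluster: a policy then only needs one action choice per
    cluster rather than one per history.
    """
    clusters = defaultdict(list)
    for h in histories:
        clusters[signature(h)].append(h)
    return list(clusters.values())

# Toy example: histories are tuples of observations, and we assume
# (purely for illustration) that the last observation is a
# sufficient statistic. Histories ending in "x" then merge.
histories = [("a", "x"), ("b", "x"), ("a", "y")]
clusters = cluster_histories(histories, signature=lambda h: h[-1])
```

With a lossless criterion, every history in a cluster provably has the same optimal value, so this merging sacrifices no solution quality while the search space can shrink dramatically.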