The Complexity of Decentralized Control of Markov Decision Processes
Mathematics of Operations Research
Learning to Cooperate via Policy Search
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Approximate Solutions for Partially Observable Stochastic Games with Common Payoffs
AAMAS '04 Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
Region-based incremental pruning for POMDPs
UAI '04 Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence
Dynamic programming for partially observable stochastic games
AAAI '04 Proceedings of the 19th National Conference on Artificial Intelligence
Solving POMDPs by searching the space of finite policies
UAI '99 Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
Solving POMDPs by searching in policy space
UAI '98 Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence
Incremental pruning: a simple, fast, exact method for partially observable Markov decision processes
UAI '97 Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
Decentralized planning under uncertainty for teams of communicating agents
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Solving POMDPs using quadratically constrained linear programs
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Exact solutions of interactive POMDPs using behavioral equivalence
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies
Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
Subjective approximate solutions for decentralized POMDPs
Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
Not all agents are equal: scaling up distributed POMDPs for agent networks
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
Value-based observation compression for DEC-POMDPs
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
Interaction-driven Markov games for decentralized multiagent planning under uncertainty
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
Solving Large-Scale and Sparse-Reward DEC-POMDPs with Correlation-MDPs
RoboCup 2007: Robot Soccer World Cup XI
Achieving goals in decentralized POMDPs
Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Point-based dynamic programming for DEC-POMDPs
AAAI '06 Proceedings of the 21st National Conference on Artificial Intelligence - Volume 2
Optimal and approximate Q-value functions for decentralized POMDPs
Journal of Artificial Intelligence Research
Policy iteration for decentralized control of Markov decision processes
Journal of Artificial Intelligence Research
Memory-bounded dynamic programming for DEC-POMDPs
IJCAI '07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Solving POMDPs using quadratically constrained linear programs
IJCAI '07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Introducing Communication in Dis-POMDPs with Finite State Machines
WI-IAT '09 Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 02
Conformant plans and beyond: Principles and complexity
Artificial Intelligence
A PGM framework for recursive modeling of players in simple sequential Bayesian games
International Journal of Approximate Reasoning
Point-based policy generation for decentralized POMDPs
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs
Autonomous Agents and Multi-Agent Systems
An investigation into mathematical programming for finite horizon decentralized POMDPs
Journal of Artificial Intelligence Research
Point-based bounded policy iteration for decentralized POMDPs
PRICAI '10 Proceedings of the 11th Pacific Rim International Conference on Trends in Artificial Intelligence
Online planning for multi-agent systems with bounded communication
Artificial Intelligence
Two decades of multiagent teamwork research: past, present, and future
CARE@AI'09/CARE@IAT'10 Proceedings of the CARE@AI 2009 and CARE@IAT 2010 International Conference on Collaborative Agents - Research and Development
Solving efficiently Decentralized MDPs with temporal and resource constraints
Autonomous Agents and Multi-Agent Systems
An optimal best-first search algorithm for solving infinite horizon DEC-POMDPs
ECML '05 Proceedings of the 16th European Conference on Machine Learning
Exploiting symmetries for single- and multi-agent Partially Observable Stochastic Domains
Artificial Intelligence
Efficient planning for factored infinite-horizon DEC-POMDPs
IJCAI '11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume One
Generalized and bounded policy iteration for finitely-nested interactive POMDPs: scaling up
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
On the Computational Complexity of Stochastic Controller Optimization in POMDPs
ACM Transactions on Computation Theory (TOCT)
Solving decentralized POMDP problems using genetic algorithms
Autonomous Agents and Multi-Agent Systems
We present a bounded policy iteration algorithm for infinite-horizon decentralized POMDPs. Policies are represented as joint stochastic finite-state controllers, which consist of a local controller for each agent. We also let a joint controller include a correlation device that allows the agents to correlate their behavior without exchanging information during execution, and show that this leads to improved performance. The algorithm uses a fixed amount of memory, and each iteration is guaranteed to produce a controller with value at least as high as the previous one for all possible initial state distributions. For the case of a single agent, the algorithm reduces to Poupart and Boutilier's bounded policy iteration for POMDPs.
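The policy-evaluation step underlying this kind of controller-based algorithm can be illustrated concretely. The sketch below is not the paper's bounded policy iteration; it only shows, for the single-agent case the abstract mentions (where the method reduces to Poupart and Boutilier's BPI), how the value of a fixed stochastic finite-state controller is obtained by solving a linear system over (node, state) pairs. The POMDP model, the function name `evaluate_controller`, and all numbers are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): evaluating a stochastic
# finite-state controller on a small hypothetical POMDP. The controller has
# action-selection probabilities psi[q, a] and node transitions
# eta[q, a, o, q']. Its value satisfies the linear system
#   V(q, s) = sum_a psi[q,a] * ( R[s,a] + gamma * sum_{s'} T[s,a,s']
#               * sum_o O[s',a,o] * sum_{q'} eta[q,a,o,q'] * V(q', s') )
# which we solve directly as (I - gamma * M) v = r.

def evaluate_controller(psi, eta, T, O, R, gamma):
    Q, A = psi.shape          # controller nodes, actions
    S = T.shape[0]            # states
    Z = O.shape[2]            # observations
    n = Q * S
    M = np.zeros((n, n))      # transition part of the Bellman operator
    r = np.zeros(n)           # immediate-reward part
    for q in range(Q):
        for s in range(S):
            i = q * S + s
            for a in range(A):
                r[i] += psi[q, a] * R[s, a]
                for s2 in range(S):
                    for o in range(Z):
                        for q2 in range(Q):
                            M[i, q2 * S + s2] += (psi[q, a] * T[s, a, s2]
                                                  * O[s2, a, o]
                                                  * eta[q, a, o, q2])
    v = np.linalg.solve(np.eye(n) - gamma * M, r)
    return v.reshape(Q, S)    # V[q, s]

# Random (but properly normalized) toy model: 2 states, actions,
# observations, and controller nodes. All values are made up.
rng = np.random.default_rng(0)
S, A, Z, Q = 2, 2, 2, 2
T = rng.random((S, A, S)); T /= T.sum(-1, keepdims=True)
Obs = rng.random((S, A, Z)); Obs /= Obs.sum(-1, keepdims=True)
R = rng.random((S, A))
psi = rng.random((Q, A)); psi /= psi.sum(-1, keepdims=True)
eta = rng.random((Q, A, Z, Q)); eta /= eta.sum(-1, keepdims=True)
V = evaluate_controller(psi, eta, T, Obs, R, gamma=0.9)
```

An improvement step would then, for each node, search (e.g. via a linear program) for new parameters psi and eta whose one-step backup dominates the current V at every state, which is what makes the monotonic-improvement guarantee in the abstract possible.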