Planning for cooperative teams under uncertainty is a crucial problem in multiagent systems. Decentralized partially observable Markov decision processes (DEC-POMDPs) provide a convenient but intractable model for specifying planning problems for cooperative teams. Compared to the single-agent case, an additional challenge is posed by the lack of free communication between teammates. We argue that acting close to optimally in a team involves a tradeoff between opportunistically exploiting an agent's local observations and remaining predictable to its teammates. We present a more opportunistic version of an existing approximate algorithm for DEC-POMDPs and investigate this tradeoff. A preliminary evaluation shows that in certain settings the opportunistic modification provides significantly better performance.
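For context, the DEC-POMDP model mentioned above is commonly formalized as a tuple (this is the standard textbook definition, not notation specific to this paper):

```latex
\langle I, S, \{A_i\}_{i \in I}, P, \{\Omega_i\}_{i \in I}, O, R \rangle
```

Here $I$ is a finite set of agents, $S$ a set of states, $A_i$ the actions of agent $i$, $P(s' \mid s, \vec{a})$ the transition function over joint actions $\vec{a}$, $\Omega_i$ the observations of agent $i$, $O(\vec{o} \mid s', \vec{a})$ the joint observation function, and $R(s, \vec{a})$ a single reward shared by the whole team. Each agent must select actions based only on its own observation history, while the team jointly maximizes expected cumulative reward; this restriction to local information, without free communication, is the source of the predictability-versus-opportunism tradeoff discussed in the abstract.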