The Complexity of Decentralized Control of Markov Decision Processes
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Optimizing information exchange in cooperative multi-agent systems
AAMAS '03 Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems
Communication for Improving Policy Computation in Distributed POMDPs
AAMAS '04 Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3
Agent interaction in distributed POMDPs and its implications on complexity
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Exploiting factored representations for decentralized execution in multiagent teams
AAMAS '07 Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies
AAMAS '07 Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
Networked distributed POMDPs: a synthesis of distributed constraint optimization and POMDPs
AAAI '05 Proceedings of the 20th National Conference on Artificial Intelligence - Volume 1
Anytime point-based approximations for large POMDPs
Journal of Artificial Intelligence Research
While distributed POMDPs have become popular for modeling multiagent systems in uncertain domains, it is the Networked Distributed POMDP (ND-POMDP) model that has begun to scale up the number of agents by exploiting the locality of agents' interactions. However, prior work on ND-POMDPs has failed to address communication. Without communication, the size of the local policy at each agent grows exponentially in the time horizon. To overcome this problem, we extend existing algorithms so that agents periodically communicate their observation and action histories to one another. After communicating, the agents can restart planning from a new synchronized belief state, avoiding the exponential growth in the size of their local policies. Furthermore, we introduce an idea similar to the Point-Based Value Iteration (PBVI) algorithm: we approximate the value function with a fixed number of representative belief points. Our experimental results show that we can obtain much longer policies than existing algorithms can, provided that the interval between communications is small.
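
To make the exponential-growth claim concrete: a depth-T local policy tree branches on each of the |Omega| possible observations at every step, so it has (|Omega|^T - 1)/(|Omega| - 1) nodes, and synchronizing every k steps caps the trees each agent must plan over at depth k. The Python sketch below illustrates one way the post-communication belief synchronization described in the abstract could be computed for a toy flat joint model; the function name, the tabular T/O models, and the example histories are all illustrative assumptions, not the paper's implementation.

    import numpy as np

    def synchronized_belief(prior, actions, observations, T, O):
        """Bayesian filtering over the joint state once agents have exchanged
        their action and observation histories at a communication point.

        prior        : (S,) belief over joint states at the previous sync point
        actions      : joint-action indices taken since the previous sync point
        observations : joint-observation indices received since that point
        T            : (A, S, S) transition model, T[a, s, s2] = P(s2 | s, a)
        O            : (A, S, Z) observation model, O[a, s2, z] = P(z | s2, a)
        """
        b = prior.copy()
        for a, z in zip(actions, observations):
            b = T[a].T @ b        # predict: b(s2) = sum_s P(s2 | s, a) * b(s)
            b = b * O[a][:, z]    # correct: weight by P(z | s2, a)
            b = b / b.sum()       # renormalize to a probability distribution
        return b

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        S, A, Z = 4, 2, 3                  # toy joint-model sizes (assumed)
        T = rng.random((A, S, S))
        T /= T.sum(axis=2, keepdims=True)  # rows sum to 1 over next states
        O = rng.random((A, S, Z))
        O /= O.sum(axis=2, keepdims=True)  # rows sum to 1 over observations
        prior = np.full(S, 1.0 / S)        # uniform belief at the last sync
        # Histories the agents would exchange when they communicate (toy data).
        print(synchronized_belief(prior, [0, 1, 0], [2, 0, 1], T, O))

Between communication points the agents still execute local policy trees; the benefit of the synchronization step is that those trees only need to cover the short interval since the last exchange rather than the full horizon.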