Planning under uncertainty for multiple agents has advanced rapidly with the development of formal models such as multi-agent MDPs and decentralized MDPs. Despite their expressiveness, the applicability of these models remains limited by their computational complexity. We present the class of event-detecting multi-agent MDPs (eMMDPs), designed to detect multiple mobile targets with a team of sensor agents. We show that solving eMMDPs is NP-hard and present a scalable 2-approximation algorithm based on matroid theory and constraint optimization. The algorithm's complexity is linear in the size of the state space and the number of agents, quadratic in the horizon, and exponential only in a small parameter that depends on the degree of interaction among the agents. Despite the worst-case approximation ratio of 2, experimental results show that the algorithm produces near-optimal policies on a range of test problems.
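The 2-approximation guarantee is the kind obtained by greedy maximization of a monotone submodular objective subject to a matroid constraint, where the greedy rule is a classical 1/2-approximation. The sketch below is an illustrative assumption, not the paper's actual algorithm: it greedily assigns each sensor at most one configuration (a partition-matroid constraint) to maximize the number of covered targets. All names here (`greedy_matroid`, `covers`, the sensors and targets) are hypothetical.

```python
def greedy_matroid(elements, partition, capacity, gain):
    """Greedily pick elements, never exceeding capacity[p] picks from
    any partition class p (the partition-matroid independence test).
    For a monotone submodular gain, this is a 1/2-approximation."""
    chosen = []
    counts = {p: 0 for p in capacity}
    remaining = list(elements)
    while remaining:
        # Elements still addable without violating the matroid constraint.
        feasible = [e for e in remaining
                    if counts[partition[e]] < capacity[partition[e]]]
        if not feasible:
            break
        # Pick the element with the largest marginal gain.
        best = max(feasible, key=lambda e: gain(chosen, e))
        if gain(chosen, best) <= 0:
            break
        chosen.append(best)
        counts[partition[best]] += 1
        remaining.remove(best)
    return chosen


# Toy coverage instance: each (sensor, config) pair covers some targets.
covers = {
    ("s1", "a"): {1, 2},
    ("s1", "b"): {2, 3},
    ("s2", "a"): {3, 4},
}
partition = {e: e[0] for e in covers}   # one matroid class per sensor
capacity = {"s1": 1, "s2": 1}           # each sensor runs one config

def gain(chosen, e):
    """Marginal coverage: number of targets newly covered by e."""
    covered = set().union(*(covers[c] for c in chosen)) if chosen else set()
    return len(covers[e] - covered)

chosen = greedy_matroid(list(covers), partition, capacity, gain)
# The greedy assignment covers all four targets in this instance.
```

The partition matroid plays the role of the per-agent policy-choice constraint; in the paper's setting the objective and constraints are, of course, richer than plain set coverage.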