The complexity of Markov decision processes
Mathematics of Operations Research
Controlling cooperative problem solving in industrial multi-agent systems using joint intentions
Artificial Intelligence
Complexity of finite-horizon Markov decision process problems
Journal of the ACM (JACM)
Communication decisions in multi-agent cooperation: model and experiments
Proceedings of the fifth international conference on Autonomous agents
Multi-agent policies: from centralized ones to decentralized ones
Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 3
The Complexity of Decentralized Control of Markov Decision Processes
Mathematics of Operations Research
Sequential Optimality and Coordination in Multiagent Systems
IJCAI '99 Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
Learning to Cooperate via Policy Search
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Transition-independent decentralized Markov decision processes
AAMAS '03 Proceedings of the second international joint conference on Autonomous agents and multiagent systems
Optimizing information exchange in cooperative multi-agent systems
AAMAS '03 Proceedings of the second international joint conference on Autonomous agents and multiagent systems
The communicative multiagent team decision problem: analyzing teamwork theories and models
Journal of Artificial Intelligence Research
A polynomial algorithm for decentralized Markov decision processes with temporal constraints
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
Agent interaction in distributed POMDPs and its implications on complexity
AAMAS '06 Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
Exact solutions of interactive POMDPs using behavioral equivalence
AAMAS '06 Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems
Reinforcement learning for DEC-MDPs with changing action sets and partially ordered dependencies
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 3
Recent Advances in Reinforcement Learning
Commitment-based service coordination
International Journal of Agent-Oriented Software Engineering
Planning with continuous resources for agent teams
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
Flexible approximation of structured interactions in decentralized Markov decision processes
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
An iterative algorithm for solving constrained decentralized Markov decision processes
AAAI'06 proceedings of the 21st national conference on Artificial intelligence - Volume 2
Solving transition independent decentralized Markov decision processes
Journal of Artificial Intelligence Research
Optimal and approximate Q-value functions for decentralized POMDPs
Journal of Artificial Intelligence Research
Offline Planning for Communication by Exploiting Structured Interactions in Decentralized MDPs
WI-IAT '09 Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 02
A bilinear programming approach for multiagent planning
Journal of Artificial Intelligence Research
Efficient and distributable methods for solving the multiagent plan coordination problem
Multiagent and Grid Systems - Planning in multiagent systems
Performance evaluation of DPS coordination strategies modelled in pi-calculus
International Journal of Intelligent Information and Database Systems
Self-organization for coordinating decentralized reinforcement learning
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1 - Volume 1
Online planning for multi-agent systems with bounded communication
Artificial Intelligence
Decentralized MDPs with sparse interactions
Artificial Intelligence
Solving efficiently Decentralized MDPs with temporal and resource constraints
Autonomous Agents and Multi-Agent Systems
Deadlock verification of a DPS coordination strategy and its alternative model in pi-calculus
International Journal of Intelligent Information and Database Systems
Heuristic search of multiagent influence space
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
Planning and evaluating multiagent influences under reward uncertainty
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
QueryPOMDP: POMDP-based communication in multiagent systems
EUMAS'11 Proceedings of the 9th European conference on Multi-Agent Systems
Modeling information exchange opportunities for effective human-computer teamwork
Artificial Intelligence
Multiagent POMDPs with asynchronous execution
Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems
Incremental clustering and expansion for faster optimal planning in decentralized POMDPs
Journal of Artificial Intelligence Research
Decentralized MDPs (DEC-MDPs) provide a powerful formal framework for planning in multi-agent systems, but the complexity of the general model limits its usefulness. In this paper we study a class of DEC-MDPs that restricts interactions between the agents to structured, event-driven dependencies. Such dependencies can model the locking of a shared resource or temporal enabling constraints, both of which arise frequently in practice. We show that the complexity of this class is no harder than exponential in the number of states and doubly exponential in the number of dependencies. Since for many problems the number of dependencies is much smaller than the number of states, this is significantly better than the doubly exponential (in the state space) complexity of general DEC-MDPs. We also demonstrate how an algorithm we previously developed can be used to solve problems in this class both optimally and approximately. Experimental results indicate that this solution technique is significantly faster than a naive policy search approach.
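To make the restricted interaction concrete, here is a minimal, hypothetical sketch (not the paper's model or algorithm; all states, actions, and reward values are invented for illustration): two agents with tiny local decision problems, coupled only by a single event-driven dependency of the shared-resource kind described above, solved by the naive exhaustive joint policy search that the abstract uses as a baseline.

```python
from itertools import product

# Hypothetical toy problem: each agent has two local states and two local
# actions. The only coupling is one event-driven dependency -- agent 2 can
# earn the reward for using the shared resource only after agent 1 has
# triggered the "unlock" event.

STATES = [0, 1]           # local states, identical for both agents
ACTIONS = ["wait", "go"]  # local actions

def local_policies():
    """Enumerate memoryless local policies: a map from state to action."""
    for choice in product(ACTIONS, repeat=len(STATES)):
        yield dict(zip(STATES, choice))

def joint_value(p1, p2):
    """Deterministic one-shot evaluation of a joint policy."""
    unlocked = p1[0] == "go"        # the event: agent 1 frees the resource
    value = 0
    if unlocked and p2[1] == "go":  # dependency: reward requires the event
        value += 10
    value -= sum(a == "go" for a in p1.values())  # unit cost per "go"
    value -= sum(a == "go" for a in p2.values())
    return value

# Naive joint policy search: exhaustive over the cross product of local
# policies. Its cost grows exponentially with the number of local states,
# which is exactly what structured, dependency-aware algorithms avoid.
best_value, best_p1, best_p2 = max(
    ((joint_value(p1, p2), p1, p2)
     for p1 in local_policies()
     for p2 in local_policies()),
    key=lambda t: t[0],
)
print(best_value)  # best achievable joint value in this toy problem
```

In the optimal joint policy, agent 1 plays "go" in state 0 to trigger the event and agent 2 plays "go" in state 1 to collect the reward; every other "go" only adds cost. The point of the sketch is the search-space size: each agent has |A|^|S| local policies, so the naive search evaluates |A|^(2|S|) joint policies, while the dependency structure involves just one event.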