Decentralized MDPs provide powerful models of interactions in multiagent environments, but they are often very difficult, or even computationally infeasible, to solve optimally. Here we develop a hierarchical approach to solving a restricted class of decentralized MDPs. By forming commitments with other agents and modeling these concisely in their local MDPs, agents formulate coordinated local policies effectively, efficiently, and in a distributed manner. We introduce a novel construction that captures commitments as constraints on local policies, and show how linear programming can be used to achieve local optimality subject to these constraints. In contrast to other commitment-enforcement approaches, ours is more robust in capturing the intended commitment semantics while maximizing local utility. We also describe a commitment-space heuristic search algorithm that approximates optimal joint policies. A preliminary empirical evaluation suggests that our approach yields approximate solutions faster than the conventional encoding of the problem as a multiagent MDP allows and, when wrapped in an exhaustive commitment-space search, finds the optimal global solution.
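The abstract does not spell out the LP construction, but the general idea of solving a local MDP by linear programming subject to a commitment constraint can be sketched over discounted state-action occupancy measures. The toy MDP below, the choice of committed state, the occupancy threshold, and the use of `scipy.optimize.linprog` are all illustrative assumptions, not details from the paper:

```python
# Sketch: one agent's local MDP solved as an LP over discounted
# state-action occupancy measures x[s,a], with a commitment modeled
# as one extra linear constraint. The 3-state MDP, the committed
# state, and the threshold are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
n_s, n_a = 3, 2                      # states: 0=start, 1=working, 2=committed "done"
mu0 = np.array([1.0, 0.0, 0.0])      # initial state distribution

# Deterministic transitions P[s, a, s'] and rewards r[s, a].
P = np.zeros((n_s, n_a, n_s))
P[0, 0, 0] = 1.0                     # action 0 at state 0: pursue the local task
P[0, 1, 1] = 1.0                     # action 1: start working toward the commitment
P[1, 0, 0] = 1.0
P[1, 1, 2] = 1.0
P[2, :, 2] = 1.0                     # "done" is absorbing
r = np.zeros((n_s, n_a))
r[0, 0] = 1.0                        # only the local task pays off directly

# Dual LP: maximize sum_{s,a} x[s,a] * r[s,a] subject to flow
# conservation, for every successor state s':
#   sum_a x[s',a] - gamma * sum_{s,a} P[s,a,s'] x[s,a] = mu0[s']
idx = lambda s, a: s * n_a + a
n = n_s * n_a
A_eq = np.zeros((n_s, n))
for sp in range(n_s):
    for s in range(n_s):
        for a in range(n_a):
            A_eq[sp, idx(s, a)] -= gamma * P[s, a, sp]
    for a in range(n_a):
        A_eq[sp, idx(sp, a)] += 1.0
b_eq = mu0

# Commitment constraint (assumed form): the total discounted occupancy
# of the committed state must reach `threshold`, i.e.
#   sum_a x[2,a] >= threshold, encoded as -sum_a x[2,a] <= -threshold.
threshold = 2.0
A_ub = np.zeros((1, n))
A_ub[0, idx(2, 0)] = A_ub[0, idx(2, 1)] = -1.0
b_ub = np.array([-threshold])

res = linprog(c=-r.ravel(), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(n_s, n_a)
occ_committed = x[2].sum()           # discounted occupancy of the committed state
policy = x.argmax(axis=1)            # greedy deterministic rounding of x
```

Without the `A_ub` row, the LP recovers the unconstrained local optimum (stay at state 0 collecting local reward forever); with it, the solver gives up just enough local reward to satisfy the commitment, which is the trade-off between commitment semantics and local utility the abstract describes.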