In multiagent planning, it is often convenient to view a problem as two subproblems: agent local planning and coordination. Agent activities can then be classified into two corresponding categories, local problem-solving activities and coordination activities, with each category addressing one subproblem. However, recent mathematical models, such as decentralized Markov decision processes (DEC-MDPs) and decentralized partially observable Markov decision processes (DEC-POMDPs), view the problem as a single decision process and do not distinguish between agent local planning and coordination. In this paper, we present a synergistic representation that brings these two views together, and show that the two views are equivalent. Under this representation, traditional plan coordination mechanisms can be conveniently modeled and interpreted as approximation methods for solving the underlying decision processes.
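To make the "single decision process" view concrete, the following is a minimal sketch of the DEC-POMDP tuple commonly written as (I, S, {A_i}, P, {Ω_i}, O, R). The class and field names are our own illustrative labels, not notation or code from the paper:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative sketch (not from the paper): a DEC-POMDP bundles all agents'
# local actions and observations into one global decision process with a
# single shared reward, making no structural distinction between "local
# planning" and "coordination" activities.
@dataclass
class DecPOMDP:
    agents: List[str]                       # I: the set of agents
    states: List[str]                       # S: global world states
    actions: Dict[str, List[str]]           # A_i: each agent's local actions
    transition: Callable[..., float]        # P(s' | s, joint action)
    observations: Dict[str, List[str]]      # Omega_i: each agent's observations
    observe: Callable[..., float]           # O(joint obs | s', joint action)
    reward: Callable[..., float]            # R(s, joint action): shared team reward

# A trivial two-agent instance with one state and deterministic dynamics,
# just to show how the pieces fit together.
example = DecPOMDP(
    agents=["a1", "a2"],
    states=["s0"],
    actions={"a1": ["listen"], "a2": ["listen"]},
    transition=lambda s, joint_a, s2: 1.0,
    observations={"a1": ["hear"], "a2": ["hear"]},
    observe=lambda joint_o, s2, joint_a: 1.0,
    reward=lambda s, joint_a: -1.0,
)
```

A DEC-MDP is the special case in which the agents' joint observations jointly determine the global state.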