Communication decisions in multi-agent cooperation: model and experiments
Proceedings of the fifth international conference on Autonomous agents
In this paper we divide multi-agent policies into two categories: centralized and decentralized. The two categories reflect different views of multi-agent systems and different decision-theoretic underpinnings. A centralized policy specifies each agent's decision as a function of the global system state; a decentralized policy, which corresponds to the decisions of situated agents, must assume that each agent has only partial knowledge of the system and must deal with communication explicitly. We relate these two types of policies by introducing a formal and systematic methodology for transforming centralized policies into a variety of decentralized ones. We introduce a set of transformation strategies and provide a representation for reasoning about decentralized communication decisions. Our experiments show that this methodology derives a class of interesting policies spanning a range of expected utilities and amounts of communication, and that it yields important insights into decentralized coordination strategies from a decision-theoretic perspective.
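The transformation the abstract describes can be illustrated with a toy sketch. The code below is not from the paper: the two-agent state space, the "communicate on local state change" strategy, and all names are hypothetical assumptions, chosen only to show the general shape of deriving decentralized policies (local state plus a belief about the teammate, with explicit messaging) from a single centralized policy over the global state.

```python
# Hypothetical sketch: turning a centralized policy over the global state
# (s1, s2) into per-agent decentralized policies that act on local state
# plus a belief about the teammate, communicating explicitly.
from itertools import product

STATES = ["low", "high"]  # assumed local state space of each agent

# Centralized policy: global state -> joint action (toy rule).
centralized = {
    (s1, s2): ("act", "act") if s1 == s2 == "high" else ("wait", "wait")
    for s1, s2 in product(STATES, STATES)
}

class DecentralizedAgent:
    """Acts on its local state and its belief about the teammate's state.

    One possible transformation strategy (of many): send a message
    whenever the local state changes, keeping the teammate's belief
    fresh at the cost of more communication.
    """
    def __init__(self, agent_id, initial_local, initial_belief):
        self.agent_id = agent_id
        self.local = initial_local
        self.belief = initial_belief  # last state heard from the teammate
        self.messages_sent = 0

    def observe(self, new_local):
        changed = new_local != self.local
        self.local = new_local
        return changed  # True -> this strategy says "communicate"

    def receive(self, teammate_state):
        self.belief = teammate_state

    def act(self):
        # Evaluate the centralized policy at the *believed* global state.
        if self.agent_id == 0:
            global_state = (self.local, self.belief)
        else:
            global_state = (self.belief, self.local)
        return centralized[global_state][self.agent_id]

# Usage: agent 0 transitions to "high" and informs agent 1.
a0 = DecentralizedAgent(0, "low", "low")
a1 = DecentralizedAgent(1, "low", "low")
if a0.observe("high"):
    a0.messages_sent += 1
    a1.receive("high")
print(a0.act(), a1.act())  # both wait: agent 1 is still "low"
```

Varying the communication rule (e.g. never communicate, or communicate only before acting) produces the family of decentralized policies with different expected utilities and message counts that the paper explores.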