Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork, where a group of agents works together to maximize a joint reward function. Since finding the optimal joint policy for a distributed POMDP is NEXP-complete when no assumptions are made about the domain, several locally optimal approaches have emerged as a viable alternative. However, communicative actions have been largely ignored in these locally optimal algorithms, or applied only under restrictive domain assumptions. In this paper, we show how communicative acts can be introduced explicitly to find locally optimal joint policies in which agents coordinate better through the synchronization that communication achieves. The introduction of communication also allows us to develop a novel compact policy representation that yields savings in both space and time, which we verify empirically. Finally, imposing constraints on communication, such as never going more than K steps without communicating, yields even greater space and time savings.
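To see why a bound on the time between communications can shrink the policy, note that an agent's local policy must map every observation history it might experience to an action. If communication fully synchronizes the team, a sync effectively resets the relevant local history, so only histories shorter than K steps ever need to be covered. The sketch below is an illustrative back-of-the-envelope count (not the paper's algorithm or representation); the function names and parameters are hypothetical, assuming `num_obs` observations per step and a finite horizon:

```python
# Illustrative sketch (hypothetical, not the paper's method): count how many
# local observation histories a policy must cover, with and without a hard
# bound K on the number of steps between synchronizing communications.

def histories_without_bound(num_obs: int, horizon: int) -> int:
    """Histories of length 0..horizon-1 an agent may face with no syncs."""
    return sum(num_obs ** t for t in range(horizon))

def histories_with_sync_every_k(num_obs: int, k: int) -> int:
    """With a sync at least every k steps, histories between syncs have
    at most k-1 observations, regardless of the overall horizon."""
    return sum(num_obs ** t for t in range(k))

if __name__ == "__main__":
    num_obs, horizon, k = 2, 10, 3
    print(histories_without_bound(num_obs, horizon))   # 1023 histories
    print(histories_with_sync_every_k(num_obs, k))     # 7 histories
```

With two observations, a horizon of 10, and K = 3, the bounded representation needs to cover 7 histories instead of 1023, which is the intuition behind the space and time savings claimed above.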