In many cooperative multiagent domains, there exist states in which the agents can act independently and others in which they must coordinate with their teammates. In this paper, we explore how factored state representations can be used to generate factored policies that a multiagent team can execute in a distributed fashion with minimal communication. The factored policies indicate the portions of the state space where no coordination is necessary, automatically alert the agents when they reach a state in which they do need to coordinate, and determine what the agents should communicate in order to achieve this coordination. We evaluate our approach experimentally by comparing the amount of communication needed by a team executing a factored policy to that of a team that must communicate at every timestep.
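To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm or domain): a factored policy represented as a decision tree over state variables. Branches that test only an agent's local variables can be executed without any communication; when the tree tests a teammate's variable, the agent knows it has reached a coordination point and queries exactly that variable. All variable names, values, and actions here are invented for illustration.

```python
# Hypothetical illustration of executing a factored policy distributedly.
# Local variables are observable by this agent; remote variables belong to
# a teammate and must be requested, so each remote test costs one message.
LOCAL_VARS = {"my_pos", "my_battery"}   # assumed local state variables
REMOTE_VARS = {"teammate_pos"}          # assumed teammate-owned variable

# Policy tree: either an action (a string) or (variable, {value: subtree}).
POLICY = (
    "my_pos",
    {
        "hall": "move_to_room",          # local test only: no coordination
        "door": (
            "teammate_pos",              # coordination point: remote variable
            {"door": "wait", "room": "enter"},
        ),
    },
)

def act(policy, local_state, query_teammate):
    """Walk the policy tree, querying the teammate only at remote tests."""
    node = policy
    messages = 0
    while isinstance(node, tuple):
        var, branches = node
        if var in LOCAL_VARS:
            value = local_state[var]     # read locally, no communication
        else:
            value = query_teammate(var)  # targeted query for one variable
            messages += 1
        node = branches[value]
    return node, messages

# In the hall the agent acts independently; at the door one query suffices.
action, msgs = act(POLICY, {"my_pos": "hall"}, lambda v: "room")
# action == "move_to_room", msgs == 0
action, msgs = act(POLICY, {"my_pos": "door"}, lambda v: "room")
# action == "enter", msgs == 1
```

The point of the sketch is the communication pattern: the tree structure itself tells the agent when no coordination is needed (purely local branches) and what to ask for when coordination is required (the single remote variable being tested), rather than broadcasting full state at every timestep.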