How to dynamically merge Markov decision processes
NIPS '97 Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10
Stochastic dynamic programming with factored representations
Artificial Intelligence
Multiagent teamwork: analyzing the optimality and complexity of key theories and models
Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2
Computing factored value functions for policies in structured MDPs
IJCAI '99 Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
Sequential optimality and coordination in multiagent systems
IJCAI '99 Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
Graphical models for game theory
UAI '01 Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence
Efficient solution algorithms for factored MDPs
Journal of Artificial Intelligence Research
Multi-agent influence diagrams for representing and solving games
IJCAI '01 Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2
Resource allocation among agents with preferences induced by factored MDPs
AAMAS '06 Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
Approximate predictive state representations
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
Partial local FriendQ multiagent learning: application to team automobile coordination problem
ECAI '06 Proceedings of the 17th European Conference on Artificial Intelligence, Riva del Garda, Italy, August 29 – September 1, 2006
Networked distributed POMDPs: a synthesis of distributed constraint optimization and POMDPs
AAAI '05 Proceedings of the 20th National Conference on Artificial Intelligence - Volume 1
Towards a unifying characterization for quantifying weak coupling in Dec-POMDPs
The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Partial local FriendQ multiagent learning: application to team automobile coordination problem
AI '06 Proceedings of the 19th International Conference on Advances in Artificial Intelligence: Canadian Society for Computational Studies of Intelligence
In multi-agent MDPs, it is generally necessary to consider the joint state space of all agents, making the size of the problem and the solution exponential in the number of agents. However, often interactions between the agents are only local, which suggests a more compact problem representation. We consider a subclass of multi-agent MDPs with local interactions where dependencies between agents are asymmetric, meaning that agents can affect others in a unidirectional manner. This asymmetry, which often occurs in domains with authority-driven relationships between agents, allows us to make better use of the locality of agents' interactions. We present and analyze a graphical model of such problems and show that, for some classes of problems, it can be exploited to yield significant (sometimes exponential) savings in problem and solution size, as well as in computational efficiency of solution algorithms.
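The kind of savings the abstract refers to can be illustrated with a back-of-the-envelope sketch. In a flat multi-agent MDP the transition model ranges over all pairs of joint states, so its size grows exponentially with the number of agents; if influence flows only along a dependency DAG, each agent's local transition model needs only its own state and its parents' states. The function names, state-space sizes, and dependency structure below are illustrative assumptions, not taken from the paper:

```python
from math import prod

def joint_model_size(sizes):
    # Flat multi-agent MDP: one transition entry per (joint state,
    # next joint state) pair -- exponential in the number of agents.
    n_joint = prod(sizes)
    return n_joint * n_joint

def factored_model_size(sizes, parents):
    # Asymmetric local interactions: agent i's transition table is
    # indexed by its own state, its parents' states, and its next
    # state, so the total size is a sum of small local tables.
    total = 0
    for i, s in enumerate(sizes):
        scope = s * prod(sizes[p] for p in parents[i])
        total += scope * s
    return total

sizes = [4, 4, 4, 4]              # four agents, 4 local states each
parents = [[], [0], [0], [1, 2]]  # unidirectional influence DAG
print(joint_model_size(sizes))                 # 65536 entries
print(factored_model_size(sizes, parents))     # 400 entries
```

Even at four agents the factored representation is two orders of magnitude smaller, and the gap widens exponentially as agents are added, which is the sense in which locality and asymmetry buy exponential savings.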