The Complexity of Decentralized Control of Markov Decision Processes. Mathematics of Operations Research.
Coordinated Reinforcement Learning. Proceedings of the Nineteenth International Conference on Machine Learning (ICML '02).
Planning, learning and coordination in multiagent decision processes. Proceedings of the 6th Conference on Theoretical Aspects of Rationality and Knowledge (TARK '96).
Hierarchical multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems.
Exploiting factored representations for decentralized execution in multiagent teams. Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '07).
Interaction-driven Markov games for decentralized multiagent planning under uncertainty. Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '08), Volume 1.
Learning multi-agent state space representations. Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '10), Volume 1.
Decentralized MDPs with sparse interactions. Artificial Intelligence.
Coordinated learning for loosely coupled agents with sparse interactions. Proceedings of the 24th International Conference on Advances in Artificial Intelligence (AI '11).
A Comprehensive Survey of Multiagent Reinforcement Learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
Creating coordinated multiagent policies in environments with uncertainty is a challenging problem in multiagent learning research. In this paper, we propose a coordinated learning approach that enables agents to learn both individual policies and coordinated behaviors by exploiting the independence relationships inherent in many multiagent systems. We illustrate how this approach solves coordination problems in robot navigation domains. Experimental results on domains of different scales demonstrate the effectiveness of our learning approach.
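The combination the abstract describes, individual policies plus coordination only where agents actually interact, can be sketched as a sparse-interaction Q-learner: each agent keeps its own Q-table over its local state and acts independently, switching to a joint Q-table only in a designated set of interaction states. This is a minimal illustrative sketch, not the paper's algorithm; the class name `SparseInteractionLearner`, the `interaction_states` parameter, and the shared-reward assumption are ours.

```python
import random
from collections import defaultdict


class SparseInteractionLearner:
    """Hypothetical sketch of two-agent Q-learning with sparse interactions.

    Outside the interaction states each agent updates and queries its own
    Q-table over (local state, local action); inside them, a joint Q-table
    over (joint state, joint action) is used so the agents can coordinate.
    """

    def __init__(self, actions, interaction_states, alpha=0.1, gamma=0.95):
        self.actions = actions                    # same action set for both agents
        self.interaction = interaction_states     # joint states requiring coordination
        self.q_ind = [defaultdict(float), defaultdict(float)]  # per-agent Q-tables
        self.q_joint = defaultdict(float)                      # joint Q-table
        self.alpha, self.gamma = alpha, gamma

    def select(self, joint_state, epsilon=0.1):
        """Epsilon-greedy joint action; coordinated only in interaction states."""
        if random.random() < epsilon:
            return tuple(random.choice(self.actions) for _ in joint_state)
        if joint_state in self.interaction:
            # Coordinate: maximize the joint Q-value over all joint actions.
            return max(((a0, a1) for a0 in self.actions for a1 in self.actions),
                       key=lambda ja: self.q_joint[(joint_state, ja)])
        # Act independently: each agent maximizes its own Q-value locally.
        return tuple(max(self.actions, key=lambda a: self.q_ind[i][(s, a)])
                     for i, s in enumerate(joint_state))

    def update(self, joint_state, joint_action, reward, next_joint_state):
        """One Q-learning step; reward is assumed shared by both agents."""
        if joint_state in self.interaction:
            best = max(self.q_joint[(next_joint_state, (a0, a1))]
                       for a0 in self.actions for a1 in self.actions)
            key = (joint_state, joint_action)
            self.q_joint[key] += self.alpha * (reward + self.gamma * best
                                               - self.q_joint[key])
        else:
            for i, (s, a, s2) in enumerate(zip(joint_state, joint_action,
                                               next_joint_state)):
                best = max(self.q_ind[i][(s2, b)] for b in self.actions)
                key = (s, a)
                self.q_ind[i][key] += self.alpha * (reward + self.gamma * best
                                                    - self.q_ind[i][key])
```

Because the joint table is consulted only in the (typically few) interaction states, the learner's memory and sample costs stay close to those of two independent learners, which is the efficiency argument behind exploiting sparse interactions.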