Exploiting independent relationships in multiagent systems for coordinated learning
PRICAI'12 Proceedings of the 12th Pacific Rim international conference on Trends in Artificial Intelligence
Multiagent learning is a challenging problem in the area of multiagent systems because the interdependencies between agents make the environment non-stationary from each agent's perspective. Learning to coordinate becomes even more difficult when agents do not know the structure of the environment and have only local observability. In this paper, we propose an approach that enables autonomous agents to learn where and how to coordinate their behaviours in environments where interactions between agents are sparse. Our approach first adopts a statistical method to detect those states where coordination is most necessary. A Q-learning based coordination mechanism is then applied to coordinate the agents' behaviours based on their local observations of the environment. Experiments in grid-world domains demonstrate the good performance of our approach.
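The two-stage idea in the abstract can be illustrated with a minimal sketch. The paper does not specify its statistical test, so the criterion below (flagging a local state as a "coordination state" when the reward received for the same state-action pair shows high variance, a common symptom of unobserved influence from other agents) is a hypothetical stand-in, as are the class name, thresholds, and parameters; the Q-learning backup itself is standard:

```python
import random
from collections import defaultdict

class SparseCoordinationLearner:
    """Sketch of an independent Q-learner that also detects states where
    coordination seems necessary. The detection rule (reward-variance
    threshold) is an illustrative assumption, not the paper's method."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1,
                 var_threshold=1.0, min_samples=10):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.var_threshold = var_threshold
        self.min_samples = min_samples
        self.q = defaultdict(float)                # (state, action) -> value
        self.reward_stats = defaultdict(list)      # (state, action) -> rewards seen
        self.coordination_states = set()           # states flagged for coordination

    def choose(self, state):
        # epsilon-greedy action selection over the local Q-table
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard Q-learning backup on the agent's local state space
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])
        # statistical detection step: high reward variance for the same
        # local (state, action) pair suggests hidden interaction effects
        rewards = self.reward_stats[(state, action)]
        rewards.append(reward)
        if len(rewards) >= self.min_samples:
            mean = sum(rewards) / len(rewards)
            var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
            if var > self.var_threshold:
                self.coordination_states.add(state)
```

Once a state lands in `coordination_states`, a fuller implementation would switch from the independent Q-table to a coordinated mechanism (e.g. learning over joint observations) in exactly those states, keeping the cheap local learner everywhere else — which is what makes sparse-interaction approaches scale.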