Reinforcement Learning of Coordination in Heterogeneous Cooperative Multi-Agent Systems
AAMAS '04: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3

References:
- The dynamics of reinforcement learning in cooperative multiagent systems. AAAI '98/IAAI '98: Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence.
- Multiagent learning using a variable learning rate. Artificial Intelligence.
- Multiagent reinforcement learning: theoretical framework and an algorithm. ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
- An algorithm for distributed reinforcement learning in cooperative multi-agent systems. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
- Reinforcement learning of coordination in cooperative multi-agent systems. Eighteenth National Conference on Artificial Intelligence.
Can a good learner compensate for a poor learner when the two are paired in a coordination game? Previous work presented an example in which a special learning algorithm (FMQ) does just that when paired with a specific less capable algorithm, even in games that stump the poorer algorithm when it is paired with itself. We argue that this result is not general. We give a straightforward extension of the coordination game in which FMQ cannot compensate for the lesser algorithm. We also provide other problematic pairings, and we argue that another high-quality algorithm cannot compensate either.
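For readers unfamiliar with the setting, the FMQ heuristic from the prior work cited above biases an independent Q-learner's action values toward actions whose best observed reward has occurred frequently, which helps two such learners coordinate in single-stage games like the classic climbing game. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's experimental setup: the class names, learning rate, FMQ weight `c`, and temperature schedule are all assumptions chosen for readability.

```python
import math
import random

# Climbing game payoff matrix (both agents receive the same reward).
# The miscoordination penalties (-30) make the optimal joint action (0, 0)
# hard to reach for ordinary independent learners.
CLIMB = [[ 11, -30,  0],
         [-30,   7,  6],
         [  0,   0,  5]]

class FMQLearner:
    """Sketch of an FMQ-style independent learner.

    Action values for selection are EV(a) = Q(a) + c * freq_max(a) * r_max(a),
    where r_max(a) is the best reward seen for a and freq_max(a) is how often
    that best reward occurred. Constants here are illustrative assumptions.
    """

    def __init__(self, n_actions=3, alpha=0.1, c=10.0):
        self.q = [0.0] * n_actions
        self.r_max = [-math.inf] * n_actions  # best reward seen per action
        self.max_hits = [0] * n_actions       # times the best reward was seen
        self.counts = [0] * n_actions         # times each action was taken
        self.alpha, self.c = alpha, c

    def ev(self, a):
        if self.counts[a] == 0:
            return self.q[a]
        freq = self.max_hits[a] / self.counts[a]
        return self.q[a] + self.c * freq * self.r_max[a]

    def act(self, temp, rng):
        # Boltzmann selection over EV, shifted by the max for stability.
        evs = [self.ev(a) for a in range(len(self.q))]
        m = max(evs)
        ws = [math.exp((e - m) / temp) for e in evs]
        r = rng.random() * sum(ws)
        acc = 0.0
        for a, w in enumerate(ws):
            acc += w
            if r <= acc:
                return a
        return len(ws) - 1

    def update(self, a, reward):
        self.counts[a] += 1
        if reward > self.r_max[a]:
            self.r_max[a], self.max_hits[a] = reward, 1
        elif reward == self.r_max[a]:
            self.max_hits[a] += 1
        self.q[a] += self.alpha * (reward - self.q[a])

def play(episodes=2000, seed=0):
    """Pair two FMQ learners on the climbing game for a number of episodes."""
    rng = random.Random(seed)
    p1, p2 = FMQLearner(), FMQLearner()
    for t in range(episodes):
        temp = max(0.1, 50.0 * math.exp(-0.006 * t))  # decaying temperature
        a, b = p1.act(temp, rng), p2.act(temp, rng)
        reward = CLIMB[a][b]
        p1.update(a, reward)
        p2.update(b, reward)
    return p1, p2
```

Pairing two copies of this learner is the homogeneous case; the heterogeneous question the abstract raises is what happens when one side is replaced by a weaker learner (e.g. plain Q-learning over `self.q` with no FMQ term), and whether the FMQ side can still steer the pair to the (0, 0) optimum.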