Assumptions underlying the convergence proofs of reinforcement learning (RL) algorithms such as Q-learning are violated when multiple interacting agents adapt their strategies on-line as a result of learning. Empirical investigations in several domains, however, have produced encouraging results. We evaluate the convergence behavior of concurrent reinforcement learning agents on the game matrices studied by Claus and Boutilier [1]. Variants of simple RL algorithms are evaluated for convergence as the number of agents per group increases, the game matrix is scaled up, feedback is delayed, and game matrix characteristics vary. Our results show surprising departures from those reported by Claus and Boutilier, particularly for larger problem sizes.
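For concreteness, the basic experimental setup studied by Claus and Boutilier can be sketched as independent (joint-action-unaware) Q-learners repeatedly playing a common-payoff matrix game. The climbing-game payoffs below follow their paper; the function name, hyperparameter values, and exploration scheme are illustrative assumptions, not the exact algorithm evaluated here.

```python
import random

# Claus & Boutilier's "climbing game": both agents receive the same
# joint reward; rows index agent 0's action, columns agent 1's action.
CLIMBING = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]

def independent_q_learners(game, episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Two independent Q-learners on a repeated single-stage game.

    Each agent keeps a Q-value per own action only (no joint-action
    model) and updates with a stateless Q-learning rule.
    Hyperparameters here are illustrative defaults.
    """
    rng = random.Random(seed)
    n = len(game)
    q = [[0.0] * n, [0.0] * n]  # one Q-table per agent
    for _ in range(episodes):
        acts = []
        for agent in range(2):
            if rng.random() < epsilon:      # epsilon-greedy exploration
                acts.append(rng.randrange(n))
            else:
                qa = q[agent]
                acts.append(max(range(n), key=lambda a: qa[a]))
        reward = game[acts[0]][acts[1]]     # common payoff for both agents
        for agent in range(2):
            a = acts[agent]
            q[agent][a] += alpha * (reward - q[agent][a])  # stateless update
    # Return each agent's greedy action after learning.
    return [max(range(n), key=lambda a: q[i][a]) for i in range(2)]
```

Running this repeatedly with different seeds shows why the climbing game is a useful probe: the high-payoff joint action (0, 0) is shielded by the large miscoordination penalties (-30), so independent learners frequently settle on a safer, suboptimal joint action.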