Induction: processes of inference, learning, and discovery
Technical Note: Q-Learning
Machine Learning
The dynamics of reinforcement learning in cooperative multiagent systems
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
Cooperation without memory or space: tags, groups and the prisoner's dilemma
MABS 2000 Proceedings of the second international workshop on Multi-agent based simulation
Multiagent learning using a variable learning rate
Artificial Intelligence
Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Reinforcement learning of coordination in cooperative multi-agent systems
Eighteenth national conference on Artificial intelligence
Evolving social rationality for MAS using "tags"
AAMAS '03 Proceedings of the second international joint conference on Autonomous agents and multiagent systems
Efficient learning equilibrium
Artificial Intelligence
Effective tag mechanisms for evolving coordination
Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems
Tag Mechanisms Evaluated for Coordination in Open Multi-Agent Systems
Engineering Societies in the Agents World VIII
Effective tag mechanisms for evolving cooperation
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Emergence of norms through social learning
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
Strategy and Fairness in Repeated Two-agent Interaction
ICTAI '10 Proceedings of the 2010 22nd IEEE International Conference on Tools with Artificial Intelligence - Volume 02
Learning to Achieve Social Rationality Using Tag Mechanism in Repeated Interactions
ICTAI '11 Proceedings of the 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence
An overview of cooperative and competitive multiagent learning
LAMAS'05 Proceedings of the First international conference on Learning and Adaption in Multi-Agent Systems
The success and failure of tag-mediated evolution of cooperation
LAMAS'05 Proceedings of the First international conference on Learning and Adaption in Multi-Agent Systems
Socially intelligent reasoning for autonomous agents
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Social instruments for robust convention emergence
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume One
Learning to achieve socially optimal solutions in general-sum games
PRICAI'12 Proceedings of the 12th Pacific Rim international conference on Trends in Artificial Intelligence
Self-Organising Common-Pool Resource Allocation and Canons of Distributive Justice
SASO '12 Proceedings of the 2012 IEEE Sixth International Conference on Self-Adaptive and Self-Organizing Systems
In multiagent systems, social optimality is a desirable goal in terms of maximizing the global efficiency of the system. We study the problem of coordinating on socially optimal outcomes among a population of agents, where each agent is randomly paired with another agent from the population each round. Previous work [Hales and Edmonds 2003; Matlock and Sen 2007, 2009] mainly resorts to modifying the interaction protocol from random interaction to tag-based interaction and focuses only on symmetric games. Moreover, in previous work the agents' decision-making processes are usually based on evolutionary learning, which typically incurs high communication cost and high variance in the coordination rate. To address these problems, we propose an alternative social learning framework with two major contributions. First, we introduce an observation mechanism to reduce the amount of communication required among agents. Second, we propose that the agents' learning strategies be based on reinforcement learning rather than evolutionary learning. Each agent explicitly keeps a record of its current state in its learning strategy and learns its optimal policy for each state independently. In this way, the learning performance is much more stable, and the framework is suitable for both symmetric and asymmetric games. The performance of this social learning framework is extensively evaluated on a testbed of two-player general-sum games, in comparison with previous work [Hao and Leung 2011; Matlock and Sen 2007]. The influences of different factors on the learning performance of the social learning framework are investigated as well.
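The abstract's per-state reinforcement learning idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the class name, parameters, and update rule below are assumptions, using standard one-step Q-learning (as in the Watkins and Dayan technical note cited above) with epsilon-greedy exploration, maintained independently for each observed state.

```python
import random
from collections import defaultdict

class SocialLearner:
    """Hypothetical sketch of an agent that keeps a Q-value per
    (state, action) pair and learns a policy for each state
    independently; details are illustrative, not from the paper."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q(state, action), defaults to 0.0
        self.actions = list(actions)
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def choose(self, state):
        # Epsilon-greedy action selection over the agent's current state.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update, applied per state.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])
```

In a population setting, each round would randomly pair two such agents, let each choose an action for its current state, and have both call `update` with the payoff from the game matrix; the observation mechanism described in the abstract would replace some of this experience with observed interactions, reducing communication.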