Effective norms can significantly enhance the performance of individual agents and agent societies. We consider individual agents that repeatedly interact over instances of a given scenario. Each interaction is framed as a stage game in which multiple action combinations yield the same optimal payoff. An agent learns to play the game over repeated interactions with multiple unknown agents. The key research question is whether a consistent norm emerges when all agents are learning at the same time. In real life, agents may have pre-formed biases or preferences that hinder or even preclude norm emergence. We study the success and speed of norm emergence when different subsets of the population hold different initial biases. In particular, we characterize the relative speed of norm emergence under varying biases, and the success of majority and minority groups in imposing their biases on the rest of the population for different bias strengths.
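The setting described above can be sketched as a simple social-learning simulation. The following is a minimal illustration, not the authors' actual model: agents are paired at random each round to play a two-action coordination game (both equilibria pay equally well), learn action values by a basic Q-update, and a configurable fraction of the population starts with a Q-value bias toward one convention. All parameter names and values here are illustrative assumptions.

```python
import random

def simulate(n_agents=50, biased_frac=0.3, bias=0.5, rounds=20000,
             alpha=0.1, eps=0.1, seed=0):
    """Illustrative sketch of norm emergence in a 2-action coordination
    game: paired agents earn payoff 1 when their actions match, else 0.
    A fraction `biased_frac` of agents starts with an initial Q-value
    offset `bias` toward action 1 (a pre-formed preference)."""
    rng = random.Random(seed)
    # q[i][a]: agent i's estimated value of action a in {0, 1}.
    q = [[0.0, bias] if i < int(n_agents * biased_frac) else [0.0, 0.0]
         for i in range(n_agents)]

    def choose(i):
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            return rng.randrange(2)
        return 0 if q[i][0] >= q[i][1] else 1

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)   # random pairing
        a, b = choose(i), choose(j)
        r = 1.0 if a == b else 0.0              # coordination payoff
        q[i][a] += alpha * (r - q[i][a])        # simple Q-update
        q[j][b] += alpha * (r - q[j][b])

    # Fraction of agents whose greedy choice is the biased convention.
    return sum(q[i][1] > q[i][0] for i in range(n_agents)) / n_agents
```

Varying `biased_frac` and `bias` lets one probe the questions raised in the abstract, e.g. how strong a minority's bias must be to pull the whole population onto its preferred convention, and how fast the population converges on a single norm.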