Reasoning about Norm Compliance with Rational Agents
ECAI 2010: Proceedings of the 19th European Conference on Artificial Intelligence
One of the main goals of the agent community is to provide trustworthy technology that allows humans to delegate specific tasks to software agents. These tasks are frequently regulated by laws and social norms. As a consequence, agents need mechanisms for reasoning about these norms, much as the users who delegated the tasks to them would. Specifically, agents should be able to balance these norms against their internal motivations before taking action. In this paper, we propose a human-inspired model for making decisions about norm compliance based on three factors: self-interest, enforcement mechanisms and internalized emotions. Different agent personalities can be defined according to the importance given to each factor. These personalities have been experimentally compared, and the results are reported in this article.
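The decision scheme described above can be illustrated with a minimal sketch. This is not the paper's actual formalism: the factor names, the linear weighting, and the threshold rule are all assumptions made for illustration; a personality is simply a weight vector over the three factors.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Hypothetical weighting of the three decision factors from the abstract."""
    self_interest: float  # weight on the agent's own utility of complying vs. violating
    enforcement: float    # weight on expected sanctions if the norm is violated
    emotion: float        # weight on the internalized emotional cost of violating

def compliance_score(p, utility_gain, expected_sanction, emotional_cost):
    """Weighted balance of the three factors; a positive score favours compliance.

    utility_gain:      utility of complying minus utility of violating
                       (negative when violating is individually profitable)
    expected_sanction: sanction probability times severity for a violation
    emotional_cost:    anticipated guilt from violating an internalized norm
    """
    return (p.self_interest * utility_gain
            + p.enforcement * expected_sanction
            + p.emotion * emotional_cost)

def complies(p, utility_gain, expected_sanction, emotional_cost):
    # Simple threshold rule: comply when the weighted score is non-negative.
    return compliance_score(p, utility_gain, expected_sanction, emotional_cost) >= 0

# Two illustrative personalities: one purely self-interested, one sanction-sensitive.
selfish = Personality(self_interest=1.0, enforcement=0.0, emotion=0.0)
fearful = Personality(self_interest=0.3, enforcement=0.7, emotion=0.0)

# For a norm whose violation is individually profitable (utility_gain < 0),
# the selfish agent violates it while the fearful agent complies.
print(complies(selfish, utility_gain=-0.5, expected_sanction=0.8, emotional_cost=0.2))  # False
print(complies(fearful, utility_gain=-0.5, expected_sanction=0.8, emotional_cost=0.2))  # True
```

Varying the weights reproduces the idea of comparing personalities experimentally: each personality is just a different point in the weight space, evaluated against the same norms and sanction regimes.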