A norm-governed agent takes social norms into account in its practical reasoning. Such norms characterise the agent's role within a specific organisational context. By adopting a role, the agent commits to adhere to the social norms associated with that role; these commitments require it to act in a way that violates none of its prohibitions or obligations. In adopting different sets of norms, an agent may experience conflicts between the norms themselves, as well as inconsistencies between candidate actions for fulfilling its obligations and its currently adopted norms. To resolve such problems, the agent must be informed about these conflicts and inconsistencies. The NoA architecture for norm-governed agents implements a computationally efficient mechanism for identifying and indicating such problems: each candidate action is assigned a label that cross-references the action with the norms it affects. Because problematic actions are indicated rather than simply filtered out, the agent can still choose to act according to its norms or against them. The labelling mechanism presented in this paper is therefore a critical step towards enabling an agent to reason about norm violations: the agent becomes norm-autonomous.
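The labelling idea can be illustrated with a minimal sketch. All names and data structures below are illustrative assumptions, not the published NoA implementation: candidate actions are cross-referenced with the adopted norms, and problematic candidates are labelled rather than removed, leaving the final choice to the agent.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of NoA-style action labelling; the class and
# function names are assumptions made for illustration.

@dataclass(frozen=True)
class Norm:
    kind: str      # "obligation" or "prohibition"
    action: str    # the action this norm refers to

@dataclass
class LabelledAction:
    action: str
    violates: list = field(default_factory=list)  # prohibitions the action would breach
    fulfils: list = field(default_factory=list)   # obligations the action would satisfy

def label_candidates(candidates, norms):
    """Cross-reference each candidate action with the adopted norms.

    Problematic actions are labelled, not filtered out, so the agent
    may still deliberately act against a norm (norm autonomy)."""
    labelled = []
    for action in candidates:
        la = LabelledAction(action)
        for norm in norms:
            if norm.action == action:
                if norm.kind == "prohibition":
                    la.violates.append(norm)
                else:
                    la.fulfils.append(norm)
        labelled.append(la)
    return labelled

# Example: one prohibition, one obligation, three candidate actions.
norms = [Norm("prohibition", "disclose_data"), Norm("obligation", "deliver_report")]
result = label_candidates(["disclose_data", "deliver_report", "idle"], norms)

# The agent can inspect the labels, e.g. to find norm-consistent options,
# while the labelled (violating) candidates remain available for deliberation.
consistent = [la.action for la in result if not la.violates]
```

The key design choice mirrored here is that `label_candidates` never discards a candidate: the violating action stays in `result` with its label, which is what allows reasoning about deliberate norm violation.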