Agents based on reactive planning architectures use pre-specified plans as behaviour specifications. Normative agents, in contrast, are motivated in their behaviour by norms: obligations motivate them to act, prohibitions motivate them to refrain from certain actions, and permissions (or privileges) and capabilities delimit the range of actions available to such an agent. An important question for normative agents is: under what circumstances is it appropriate for an agent to adopt a new set of obligations, prohibitions or permissions, and what effect does such adoption have on the agent's normative state? In answering this question, a critical issue is whether or not the new set of norms is consistent with the agent's current normative state. The consistency of a set of norms is discussed in detail with respect to a particular reactive planning agent architecture, NoA, but, it is argued, the discussion provides useful insight into the problem of norm consistency in general, and particularly where practical reasoning agents are concerned.
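As a rough illustration of the kind of check the abstract describes, the sketch below models a norm as a deontic modality applied to an action and tests whether a newly offered set of norms conflicts with an agent's current normative state. This is a hypothetical simplification, not NoA's actual mechanism: NoA norms also carry activation and expiration conditions and are evaluated against the agent's plans, none of which is modelled here. The `Norm`, `conflicts`, and `is_consistent` names are invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    # kind is one of "obligation", "prohibition", "permission" (simplified)
    kind: str
    action: str

def conflicts(a: Norm, b: Norm) -> bool:
    """Two norms over the same action conflict when one obliges the
    action and the other prohibits it (the simplest deontic clash)."""
    if a.action != b.action:
        return False
    return {a.kind, b.kind} == {"obligation", "prohibition"}

def is_consistent(current: set[Norm], new_norms: set[Norm]) -> bool:
    """A new norm set is adoptable only if none of its norms conflicts
    with the agent's current normative state or with another new norm."""
    combined = current | new_norms
    return not any(conflicts(a, b) for a in new_norms for b in combined)

# A tiny normative state: the agent is obliged to deliver goods.
state = {Norm("obligation", "deliver_goods")}

print(is_consistent(state, {Norm("permission", "send_report")}))     # True
print(is_consistent(state, {Norm("prohibition", "deliver_goods")}))  # False
```

In this simplification, adoption would simply be rejected (or flagged for resolution) when `is_consistent` returns `False`; a fuller treatment would also consider when each norm is active, since an obligation and a prohibition on the same action need not clash if their activation periods never overlap.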