SOAR: an architecture for general intelligence. Artificial Intelligence.
A knowledge level analysis of belief revision. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning.
AgentSpeak(L): BDI agents speak out in a logical computable language. In MAAMAW '96: Proceedings of the 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World (Agents Breaking Away).
Analysing Rational Properties of Change Operators Based on Forward Chaining. In ILPS '97: International Seminar on Logic Databases and the Meaning of Change, Transactions and Change in Logic Databases.
Truth maintenance systems for problem solving. In IJCAI '77: Proceedings of the 5th International Joint Conference on Artificial Intelligence, Volume 1.
Iterated theory base change: a computational model. In IJCAI '95: Proceedings of the 14th International Joint Conference on Artificial Intelligence, Volume 2.
AAAI '90: Proceedings of the Eighth National Conference on Artificial Intelligence, Volume 2.
Belief revision for AgentSpeak agents. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems.
Belief Revision in a Fact-Rule Agent's Belief Base. In KES-AMSTA '09: Proceedings of the Third KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications.
Automating belief revision for AgentSpeak. In DALT '06: Proceedings of the 4th International Conference on Declarative Agent Languages and Technologies.
Agents need to be able to change their beliefs; in particular, they should be able to contract (remove) a belief in order to restore consistency to their belief set, and to revise their beliefs by incorporating a new belief that may be inconsistent with their previous beliefs. An influential theory of belief change proposed by Alchourrón, Gärdenfors and Makinson (AGM) [1] gives postulates that rational belief revision and contraction operations should satisfy. The AGM postulates are usually taken as characterising idealised rational reasoners, and the corresponding belief change operations are considered unsuitable for implementable agents because of their high computational cost [2]. The main result of this paper is to show that an efficient (linear-time) belief contraction operation nevertheless satisfies all but one of the AGM postulates for contraction. This contraction operation is defined for an implementable rule-based agent that can be seen as a reasoner in a very weak logic; although the agent's beliefs are deductively closed with respect to this logic, checking consistency and tracing dependencies between beliefs is not computationally expensive. Finally, we give a non-standard definition of belief revision in terms of contraction for our agent.
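To make the abstract's ideas concrete, here is a minimal, hypothetical Python sketch of a fact-rule belief base in which contraction removes a belief together with any beliefs whose every justification depends on something removed, and revision is defined via contraction (first contract the negation, then add the new belief). The class and method names are illustrative assumptions, not the paper's actual algorithm, and this naive fixed-point loop does not attempt to reproduce the paper's linear-time bound.

```python
class BeliefBase:
    """Hypothetical fact-rule belief base with justification tracking."""

    def __init__(self):
        self.beliefs = set()       # held literals, e.g. "p" or "-p"
        self.justifications = {}   # belief -> list of premise sets deriving it

    def add(self, belief, premises=frozenset()):
        # An empty premise set marks a base fact; otherwise a derived belief.
        self.beliefs.add(belief)
        self.justifications.setdefault(belief, []).append(frozenset(premises))

    def contract(self, belief):
        """Remove `belief` and, transitively, every derived belief all of
        whose justifications rely on a removed belief."""
        removed = {belief}
        changed = True
        while changed:
            changed = False
            for b in list(self.beliefs):
                if b in removed:
                    continue
                js = self.justifications.get(b, [])
                # b is dropped when it is derived and every one of its
                # justifications mentions a removed belief.
                if js and all(prem & removed for prem in js):
                    removed.add(b)
                    changed = True
        self.beliefs -= removed

    def revise(self, belief):
        """Revision defined via contraction: contract the negation of the
        incoming belief, then add it (a Levi-identity-style construction)."""
        self.contract(negate(belief))
        self.add(belief)


def negate(lit):
    # Literals use a "-" prefix for negation in this toy encoding.
    return lit[1:] if lit.startswith("-") else "-" + lit
```

A short usage example: contracting `p` also removes `q` when `q`'s only justification depends on `p`, and revising with `p` first clears `-p` so consistency is preserved.

```python
bb = BeliefBase()
bb.add("p")                   # base fact
bb.add("q", premises={"p"})   # q derived solely from p
bb.contract("p")              # q loses its only support and is removed too

bb2 = BeliefBase()
bb2.add("-p")
bb2.revise("p")               # -p is contracted before p is added
```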