The AGM postulates for belief revision, augmented by the DP postulates for iterated belief revision, provide generally accepted criteria for the design of operators by which intelligent agents adapt their beliefs incrementally to new information. These postulates alone, however, are too permissive: they admit operators under which all newly acquired information is cancelled as soon as the agent learns a fact that contradicts some of its current beliefs. In this paper, we present a formal analysis of this deficiency of the DP postulates, and we show how to solve the problem with an additional postulate of independence. We give a representation theorem for this postulate and prove that it is compatible with both AGM and DP.
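The deficiency described above can be seen in a minimal sketch (not the paper's formal analysis) using Boutilier's natural revision, a DP-compliant operator on rankings of possible worlds: after the agent learns p and then q, a subsequent revision by ¬p cancels the belief in q as well, even though q is unrelated to the contradiction.

```python
from itertools import product

# Worlds are truth assignments to the atoms p and q; a ranking (an ordinal
# conditional function) maps each world to a non-negative rank. Lower rank
# means more plausible; the agent believes whatever holds in all rank-0 worlds.
WORLDS = list(product([True, False], repeat=2))  # pairs (p-value, q-value)

def beliefs(kappa):
    """Return the set of most plausible (minimal-rank) worlds."""
    m = min(kappa.values())
    return {w for w, r in kappa.items() if r == m}

def natural_revision(kappa, prop):
    """Boutilier's natural revision: the most plausible prop-worlds move to
    rank 0, and every other world is pushed down by one rank."""
    best = min(kappa[w] for w in WORLDS if prop(w))
    return {w: 0 if (prop(w) and kappa[w] == best) else kappa[w] + 1
            for w in WORLDS}

p = lambda w: w[0]
q = lambda w: w[1]
not_p = lambda w: not w[0]

kappa = {w: 0 for w in WORLDS}          # initially ignorant: all worlds equal
kappa = natural_revision(kappa, p)      # learn p
kappa = natural_revision(kappa, q)      # then learn q -> believes p and q
kappa = natural_revision(kappa, not_p)  # now learn a contradicting fact, not-p

print(all(not p(w) for w in beliefs(kappa)))  # True: the agent believes not-p
print(all(q(w) for w in beliefs(kappa)))      # False: belief in q is cancelled
```

The final revision by ¬p promotes all previously implausible ¬p-worlds to rank 0 at once, wiping out the distinction (q vs. ¬q) that the second revision had established; an independence postulate of the kind the abstract proposes is meant to rule out exactly this behaviour.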