The learning power of belief revision
TARK '98: Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge
We analyze the learning power of iterated belief revision methods, and in particular their universality: whether they can learn everything that can be learnt. We focus on three popular methods: conditioning, lexicographic revision, and minimal revision. Our main result is that conditioning and lexicographic revision are universal on arbitrary epistemic states, provided that the observational setting is sound and complete (only true data are observed, and all true data are eventually observed) and provided that a non-standard (non-well-founded) prior plausibility relation is allowed. We show that a standard (well-founded) belief-revision setting is in general too narrow for this. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition (saying that only finitely many errors occur, and that every error is eventually corrected), we show that lexicographic revision is still universal in this setting, while the other two methods are not.
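To make the three revision policies concrete, the following Python sketch illustrates them under simplifying assumptions that are not part of the paper: a finite set of worlds, a total plausibility order represented as a list from most to least plausible, and propositions represented as sets of worlds. (The paper's framework is more general, allowing plausibility preorders and, crucially, non-well-founded priors.)

```python
# Illustrative sketch only, not the paper's formal framework.
# A "state" is a list of worlds ordered from most to least plausible;
# an observation ("proposition") is the set of worlds where it is true.

def conditioning(order, prop):
    """Conditioning (update): discard worlds inconsistent with the observation."""
    return [w for w in order if w in prop]

def lexicographic(order, prop):
    """Lexicographic revision: promote all observation-worlds above the rest,
    preserving the relative order within each group."""
    return [w for w in order if w in prop] + [w for w in order if w not in prop]

def minimal(order, prop):
    """Minimal (conservative) revision: promote only the most plausible
    observation-world; leave the rest of the order untouched."""
    best = next((w for w in order if w in prop), None)
    if best is None:
        return order
    return [best] + [w for w in order if w != best]

def learn(order, stream, revise):
    """Iterate a revision method over a stream of observations and record the
    agent's conjecture (the most plausible world) after each step."""
    conjectures = []
    for prop in stream:
        order = revise(order, prop)
        conjectures.append(order[0] if order else None)
    return conjectures

# Hypothetical example: four worlds, actual world 'w2'; every observation in the
# stream is true at 'w2' (a sound stream), so the conjecture stabilizes on 'w2'.
worlds = ["w1", "w2", "w3", "w4"]
stream = [{"w2", "w3"}, {"w1", "w2"}, {"w2", "w4"}]
print(learn(list(worlds), stream, lexicographic))  # ['w2', 'w2', 'w2']
```

In this toy run the conjecture stabilizes on the actual world, matching the intuition behind the universality results for conditioning and lexicographic revision on sound and complete streams; the paper's negative results (for minimal revision, for well-founded priors, and for streams with errors) concern cases this simple finite example does not exhibit.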