As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", one that does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known in the statistical literature as CAR ("coarsening at random") characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it by giving a randomized algorithm that generates all and only the distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more general notions of update, such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and we show that there exist very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
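A minimal sketch of the naive-space failure in the Monty Hall puzzle may help fix ideas (this illustrates the general point of the abstract, not any specific construction from the paper). Conditioning in the naive space, i.e. on the bare event "the car is not behind the opened door", yields 1/2 for staying, while conditioning in a space that models the protocol (Monty never opens the player's door or the car's door, breaking ties uniformly — the standard assumption) yields 1/3 for staying and 2/3 for switching:

```python
from fractions import Fraction

# Joint distribution over (car_door, opened_door), with the player
# holding door 1. Monty never opens door 1 or the car's door, and
# chooses uniformly at random when two doors are available to him.
joint = {}
for car in (1, 2, 3):
    openable = [d for d in (2, 3) if d != car]
    for opened in openable:
        joint[(car, opened)] = Fraction(1, 3) / len(openable)

# Naive conditioning: observe only "car is not behind door 3".
# P(car = 1 | car != 3) = (1/3) / (2/3) = 1/2.
naive = Fraction(1, 3) / (Fraction(1, 3) + Fraction(1, 3))

# Protocol-aware conditioning on the actual observation
# "Monty opened door 3".
p_open3 = sum(p for (car, opened), p in joint.items() if opened == 3)
p_stay = joint[(1, 3)] / p_open3    # P(car = 1 | opened = 3) = 1/3
p_switch = joint[(2, 3)] / p_open3  # P(car = 2 | opened = 3) = 2/3

print(naive, p_stay, p_switch)  # 1/2 1/3 2/3
```

The discrepancy between `naive` and `p_stay` is exactly the kind of failure the CAR condition diagnoses: the coarsening here (which door Monty opens) is not "at random", since it depends on where the car is.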