As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", one that does not take into account the protocol by which observations are obtained, can often lead to counterintuitive results. Here we examine why. A criterion known in the statistical literature as CAR ("coarsening at random") characterizes when "naive" conditioning in a naive space gives the right answers. We show that the CAR condition holds rather infrequently. We then consider more general notions of update, such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, but show that no such condition exists for MRE. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
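To see concretely how naive conditioning and protocol-aware conditioning can disagree, here is a minimal sketch of the Monty Hall puzzle mentioned above. It is an illustration under the standard assumptions (the contestant picks door 1; the host always opens an unpicked, car-free door, choosing uniformly when two are available), not code from the paper:

    from fractions import Fraction

    prior = Fraction(1, 3)  # the car is behind each door with probability 1/3

    # Sophisticated space: joint distribution over (car location, door opened),
    # built from the host's protocol described above.
    joint = {}
    for car in (1, 2, 3):
        for opened in (2, 3):
            if opened == car:
                continue  # the host never reveals the car
            # With the car behind door 1 the host flips a fair coin between
            # doors 2 and 3; otherwise his choice is forced.
            p_open = Fraction(1, 2) if car == 1 else Fraction(1)
            joint[(car, opened)] = prior * p_open

    def naive_update(opened):
        # Condition on the naive event "the car is not behind the opened door".
        mass = {car: prior for car in (1, 2, 3) if car != opened}
        total = sum(mass.values())
        return {car: p / total for car, p in mass.items()}

    def protocol_update(opened):
        # Condition in the sophisticated space on "the host opened this door".
        mass = {car: p for (car, o), p in joint.items() if o == opened}
        total = sum(mass.values())
        return {car: p / total for car, p in mass.items()}

    print(naive_update(3))     # doors 1 and 2 each get 1/2: the counterintuitive answer
    print(protocol_update(3))  # door 1 gets 1/3, door 2 gets 2/3: switching wins

The naive space collapses the observation to the event "the car is not behind door 3" and discards the host's protocol; conditioning in the sophisticated space retains it and recovers the familiar 1/3 versus 2/3 answer. The CAR condition discussed above characterizes exactly when the two computations agree.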