There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, a significant body of work argues that sets of probability measures, rather than single measures, are a more realistic model of uncertainty. Conditioning still makes sense in this context (we can simply condition each measure in the set individually, then combine the results) and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen's axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and less compelling) list of axioms that together force conditioning in this setting, and describe other update methods that become available once any of these axioms is dropped.
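The element-wise procedure mentioned above (condition each measure in the set individually, then pool the results) can be sketched as follows. This is a minimal illustration, not code from the paper; the representation of measures as dictionaries and the function names are assumptions made for the example.

```python
from fractions import Fraction

def condition(measure, event):
    """Condition one measure (dict: outcome -> probability) on an event (set of outcomes)."""
    z = sum(p for w, p in measure.items() if w in event)
    if z == 0:
        return None  # conditioning is undefined when the event has probability zero
    return {w: p / z for w, p in measure.items() if w in event}

def condition_set(measures, event):
    """Condition each measure in the set individually, discarding undefined cases."""
    updated = (condition(m, event) for m in measures)
    return [m for m in updated if m is not None]

# Two measures over outcomes {a, b, c}, updated on observing the event {a, b}:
P1 = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
P2 = {"a": Fraction(1, 3), "b": Fraction(1, 3), "c": Fraction(1, 3)}
posterior = condition_set([P1, P2], {"a", "b"})
# P1 conditioned on {a, b} gives {a: 2/3, b: 1/3};
# P2 conditioned on {a, b} gives {a: 1/2, b: 1/2}.
```

The paper's point is precisely that this natural-looking rule is much harder to justify axiomatically for sets of measures than conditioning is for a single measure.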