This paper distinguishes between objective probability, or chance, and subjective (epistemic) probability. Most statistical methods in machine learning are based on the hypothesis that there is a random experiment from which we obtain a set of observations. This random experiment can be identified with a chance, or objective probability, but that probability depends on some unknown parameters. Our knowledge of these parameters is not objective, and in order to learn about them we must assess epistemic probabilities about their values. In some cases our objective knowledge about these parameters is vacuous, so the question arises: what epistemic probabilities should be assumed? In this paper we argue for the assumption of non-vacuous interval probabilities, i.e., intervals that are proper subsets of [0,1]. There are several reasons for this: some are based on the betting interpretation of epistemic probabilities, while others are based on the poor learning capabilities of the vacuous representation. The implications of the selection of epistemic probabilities for different notions, such as conditioning and learning, are studied. It is shown that, in order to maintain reasonable learning capabilities, we have to assume more informative prior models than those frequently used in the literature, such as the imprecise Dirichlet model.
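To make the final point concrete, the following minimal Python sketch (not part of the paper) computes the posterior probability intervals of Walley's imprecise Dirichlet model for multinomial data. The function name idm_interval and the conventional choice s = 2 are illustrative assumptions; the interval formula itself is the standard one, [n_i/(N+s), (n_i+s)/(N+s)] for category i after N observations.

```python
def idm_interval(counts, i, s=2.0):
    """Lower and upper posterior probability for category i under the
    imprecise Dirichlet model with hyperparameter s (s = 2 is a common
    but illustrative choice)."""
    n = sum(counts)  # total number of observations N
    return counts[i] / (n + s), (counts[i] + s) / (n + s)

# With no observations the prior interval is the vacuous [0, 1]:
print(idm_interval([0, 0, 0], 0))  # (0.0, 1.0)

# After observing counts (3, 1, 0) the interval for category 0 tightens:
print(idm_interval([3, 1, 0], 0))  # (0.5, 0.8333...)
```

The vacuous prior interval [0, 1] produced by this model is exactly the kind of prior near-ignorance whose learning limitations the paper examines, and which motivates its argument for more informative (non-vacuous) prior interval probabilities.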