International Journal of Approximate Reasoning
Empirical evidence shows that naive Bayesian classifiers perform quite well compared with more sophisticated classifiers, even when their parameter estimates are inaccurate. In this paper, we study the effects of such parameter inaccuracies by investigating the sensitivity functions of a naive Bayesian network. We show that, as a consequence of the network's independence properties, these sensitivity functions are highly constrained. We further investigate whether the patterns of sensitivity that follow from these functions support the observed robustness of naive Bayesian classifiers. In addition to standard sensitivities given the available evidence, we study the effect of parameter inaccuracies under scenarios of additional evidence, and we show that standard sensitivity functions suffice to describe such scenario sensitivities.
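To illustrate the kind of constraint the abstract refers to, the sketch below (all numbers are illustrative, not taken from the paper) computes the posterior of a small hypothetical naive Bayesian classifier as a function of a single network parameter x = Pr(E0 = true | pos). A one-way sensitivity function in a Bayesian network is a quotient of two functions that are linear in the varied parameter; here, because the evidence instantiates the feature the parameter belongs to, it reduces to the form f(x) = a·x / (c·x + 1), so two probe points recover the coefficients and any further probe must agree.

```python
from fractions import Fraction

# Hypothetical two-class naive Bayes with three binary features
# (illustrative numbers, not from the paper).
prior = {"pos": Fraction(3, 10), "neg": Fraction(7, 10)}
likelihood = {                      # likelihood[c][i] = Pr(E_i = true | c)
    "pos": [Fraction(4, 5), Fraction(1, 2), Fraction(3, 5)],
    "neg": [Fraction(1, 5), Fraction(2, 5), Fraction(1, 2)],
}
EVIDENCE = (True, True, False)      # observed values of E0, E1, E2


def posterior_pos(x):
    """Pr(pos | EVIDENCE) with the parameter Pr(E0 = true | pos) set to x
    (its complement co-varies to 1 - x, trivially, since E0 is binary)."""
    joint = {}
    for c in prior:
        p = prior[c]
        for i, obs in enumerate(EVIDENCE):
            theta = x if (c == "pos" and i == 0) else likelihood[c][i]
            p *= theta if obs else 1 - theta
        joint[c] = p
    return joint["pos"] / (joint["pos"] + joint["neg"])


# The sensitivity function here has the constrained form
#   f(x) = a*x / (c*x + 1),
# so two probe points determine (a, c). Rearranging f(x) = y gives the
# linear system  x*a - x*y*c = y ; solve it by Cramer's rule.
x1, x2 = Fraction(1, 4), Fraction(1, 2)
y1, y2 = posterior_pos(x1), posterior_pos(x2)
det = x1 * (-x2 * y2) - (-x1 * y1) * x2
a = (y1 * (-x2 * y2) - (-x1 * y1) * y2) / det
c = (x1 * y2 - y1 * x2) / det

# A third probe confirms the quotient-of-linear-functions shape exactly.
x3 = Fraction(3, 4)
assert posterior_pos(x3) == a * x3 / (c * x3 + 1)
```

Exact rational arithmetic (`fractions.Fraction`) makes the final check an equality rather than a floating-point tolerance test; with real-valued CPTs one would compare within a small epsilon instead.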