Naive Bayesian classifiers are used in a wide range of application domains. These models generally show good performance despite their strong underlying independence assumptions. In this paper, however, we demonstrate, by means of an example probability distribution, that a data set of instances can give rise to a classifier with counterintuitive behaviour. We argue that such behaviour can be attributed to the learning algorithm having constructed incorrect directions of monotonicity for some of the feature variables involved. We further show that conditions can be derived under which the learning algorithm retrieves the correct directions.
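The phenomenon the abstract describes can be sketched in a few lines. The following is a minimal, illustrative example (not the paper's actual construction): a naive Bayesian classifier over a single three-valued ordered feature F and a binary class C, with hypothetical conditional probabilities chosen so that the posterior Pr(C = 1 | F) is not monotone in F, even though one might expect a fixed direction of monotonicity.

```python
# Hypothetical parameters of a naive Bayesian classifier: a uniform class
# prior Pr(C) and conditional probabilities Pr(F = f | C = c) for an
# ordered feature F with values 0 < 1 < 2. The numbers are illustrative,
# not taken from the paper.
prior = {0: 0.5, 1: 0.5}
cond = {
    0: {0: 0.6, 1: 0.1, 2: 0.3},  # Pr(F = f | C = 0)
    1: {0: 0.3, 1: 0.4, 2: 0.3},  # Pr(F = f | C = 1)
}

def posterior(f):
    """Pr(C = 1 | F = f), computed by Bayes' rule."""
    joint = {c: prior[c] * cond[c][f] for c in prior}
    return joint[1] / sum(joint.values())

posteriors = [posterior(f) for f in (0, 1, 2)]
# An isotone (non-decreasing) direction of monotonicity in F would
# require every step along the ordered values to not decrease.
is_monotone = all(a <= b for a, b in zip(posteriors, posteriors[1:]))
print(posteriors, is_monotone)
```

Here the posterior rises from F = 0 to F = 1 but falls again at F = 2, so neither an isotone nor an antitone direction of monotonicity holds; a learning algorithm that nevertheless commits to one direction will exhibit counterintuitive behaviour on part of the feature's range.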