Here we revisit the Naïve Bayes classifier (NB). A problem from veterinary medicine with features assumed to be independent led us to look once again at this model. Why NB remains effective even when the independence assumption is violated is still an open question. In this study we attempt to develop a bound relating the level of dependency between features to the classification error of Naïve Bayes. Because dependency among more than two features is difficult to define and express analytically, we consider a simple two-class, two-feature example problem. Using simulations, with dependency measured by Yule's Q-statistic, we established empirical bounds between the calculable error and the error under the true distribution.
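The two-class, two-binary-feature setting described above can be sketched in code: for given class-conditional joint distributions we can compute Yule's Q for each class and compare the exact error of the true (Bayes-optimal) rule with the error of the naive rule that replaces each joint by the product of its marginals. This is a minimal illustrative sketch, not the authors' simulation protocol; the specific distributions and function names are our own assumptions.

```python
import numpy as np

def yule_q(joint):
    """Yule's Q for a 2x2 joint distribution: (ad - bc) / (ad + bc)."""
    a, b = joint[0, 0], joint[0, 1]
    c, d = joint[1, 0], joint[1, 1]
    return (a * d - b * c) / (a * d + b * c)

def errors(joint0, joint1, prior0=0.5):
    """Exact error of the true-joint Bayes rule vs. the Naive Bayes rule
    for a two-class problem with two binary features (illustrative helper)."""
    prior1 = 1.0 - prior0
    # Naive Bayes replaces each class-conditional joint by the product
    # of its two marginal distributions.
    nb0 = np.outer(joint0.sum(axis=1), joint0.sum(axis=0))
    nb1 = np.outer(joint1.sum(axis=1), joint1.sum(axis=0))
    bayes_err = nb_err = 0.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            p0 = prior0 * joint0[x1, x2]   # true P(y=0, x1, x2)
            p1 = prior1 * joint1[x1, x2]   # true P(y=1, x1, x2)
            bayes_err += min(p0, p1)       # optimal rule errs on the smaller mass
            # NB picks the class with the larger approximate posterior;
            # its error contribution is the true mass of the other class.
            nb_err += p1 if prior0 * nb0[x1, x2] >= prior1 * nb1[x1, x2] else p0
    return bayes_err, nb_err

# Example distributions (hypothetical): features strongly dependent within
# each class, with opposite signs of association, so the naive marginals
# carry no class information at all.
joint0 = np.array([[0.4, 0.1], [0.1, 0.4]])   # class 0: positive association
joint1 = np.array([[0.1, 0.4], [0.4, 0.1]])   # class 1: negative association
q0, q1 = yule_q(joint0), yule_q(joint1)        # ≈ +0.88 and −0.88
b_err, n_err = errors(joint0, joint1)          # 0.2 vs. 0.5 (chance level)
```

Sweeping such distributions over a range of |Q| values and recording the gap between the two errors is one way to produce empirical curves of the kind the study relies on: here the high within-class dependence (|Q| ≈ 0.88) drives NB to chance-level error while the true-distribution error stays at 0.2.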