Naive Bayesian (NB) classifiers are among the most popular techniques underlying many classification applications, both in theory and in practice. Our studies show that classification performance depends heavily on the discretisation technique used in the Bayesian classifier, so the design of such techniques becomes a critical issue. In this paper, we propose a novel discretisation technique in which continuous attributes are divided into enough intervals that the intersections of the different class-conditional density curves can be located; this allows us to compute more precise approximations of the true probability density than traditional approaches do. The Dirichlet prior assumption and its important property, perfect aggregation, are presented to build a sound theoretical foundation for our methodology. Appropriate attribute divisions and the construction of new intervals are also discussed in detail. The developed technique is tested on UCI benchmark data sets, and the results are compared with other state-of-the-art techniques to illustrate the effectiveness of the new approach.
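The abstract only outlines the interval-construction idea, so a minimal sketch may help make it concrete. Assuming one Gaussian fit per class (the paper's actual density model is not specified here), the hypothetical helper intersection_cut_points below locates the points where pairs of class-conditional density curves cross and returns them as candidate discretisation boundaries; it is an illustration of the general idea, not the authors' algorithm:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def intersection_cut_points(x, y):
    """Sketch: fit a Gaussian density per class for one continuous
    attribute and return the points where the fitted class-conditional
    curves intersect, as candidate discretisation boundaries."""
    classes = np.unique(y)
    # (mean, std) of the attribute within each class
    fits = [(x[y == c].mean(), x[y == c].std(ddof=1)) for c in classes]
    lo, hi = x.min(), x.max()
    cuts = set()
    for i in range(len(fits)):
        for j in range(i + 1, len(fits)):
            mi, si = fits[i]
            mj, sj = fits[j]
            diff = lambda t: norm.pdf(t, mi, si) - norm.pdf(t, mj, sj)
            # scan a grid for sign changes, then refine each root
            grid = np.linspace(lo, hi, 200)
            vals = np.array([diff(t) for t in grid])
            for k in np.where(np.diff(np.sign(vals)) != 0)[0]:
                cuts.add(brentq(diff, grid[k], grid[k + 1]))
    return sorted(cuts)
```

A value of the attribute can then be mapped to an interval with numpy.digitize(x, cuts), and the interval probabilities estimated with a symmetric Dirichlet prior, P(interval k | class c) = (n_kc + alpha) / (n_c + K * alpha) for K intervals, which reduces to Laplace smoothing at alpha = 1. This is the standard way a Dirichlet prior enters NB interval estimation; the paper's exact estimator and its use of perfect aggregation may differ.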