In learning Bayesian network classifiers, estimating probabilities from a given set of training examples is crucial. In many cases, probabilities are estimated by the fraction of times an event is observed to occur over the total number of opportunities. However, when training examples are scarce, this estimation method inevitably suffers from the zero-frequency problem. To avoid this practical problem, the Laplace estimate is commonly used. The m-estimate is another well-known probability estimation method, which raises a natural question: can a Bayesian network classifier with the m-estimate perform even better? To answer this question, we single out a specific m-estimate method and empirically investigate its effect on various Bayesian network classifiers, including Naive Bayes (NB), Tree Augmented Naive Bayes (TAN), Averaged One-Dependence Estimators (AODE), and Hidden Naive Bayes (HNB). Our experiments show that classifiers with our m-estimate outperform those with the Laplace estimate.
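
The abstract does not spell out the three estimators, so the following minimal Python sketch illustrates the standard textbook forms; it is not the paper's specific m-estimate (the particular choice of m and of the prior p is exactly what the authors investigate), and the toy attribute values, the value count k = 3, and m = 1.0 are illustrative assumptions.

    from collections import Counter

    def frequency_estimate(count, total):
        # Raw relative frequency; returns 0 for any unseen event,
        # which is the zero-frequency problem described above.
        return count / total if total > 0 else 0.0

    def laplace_estimate(count, total, num_values):
        # Laplace (add-one) smoothing over an attribute with
        # num_values possible values: (n_c + 1) / (n + k).
        return (count + 1) / (total + num_values)

    def m_estimate(count, total, prior, m):
        # m-estimate: (n_c + m * p) / (n + m). Blends the observed
        # frequency with a prior p, weighted by an equivalent sample
        # size m. With p = 1/k and m = k it reduces to Laplace.
        return (count + m * prior) / (total + m)

    # Hypothetical sample: the value "red" never occurs, so the raw
    # frequency estimate assigns it probability zero.
    samples = ["blue", "blue", "green", "blue", "green"]
    counts = Counter(samples)
    n = len(samples)
    k = 3  # assumed number of possible attribute values

    for v in ("red", "green", "blue"):
        print(v,
              round(frequency_estimate(counts[v], n), 3),
              round(laplace_estimate(counts[v], n, k), 3),
              round(m_estimate(counts[v], n, prior=1 / k, m=1.0), 3))

Running the sketch shows that the unseen value "red" gets probability 0.0 from the raw frequency estimate but a small nonzero probability from both smoothed estimators, which is why either form of smoothing is needed when training data are scarce.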