Naïve Bayes (NB) is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is violated. In response, superparent-one-dependence estimators (SPODEs) weaken the attribute independence assumption by using each attribute of the database in turn as the superparent. Aggregating one-dependence estimators (AODE), which estimates the corresponding parameters for every SPODE, has proved to be one of the most effective of these improvements to the NB classifier owing to its high accuracy. This paper investigates a novel approach that ensembles single SPODEs with a boosting strategy: Boosting for superparent-one-dependence estimators (BODE). BODE first assigns every instance a weight, and in each iteration selects the SPODE with the highest weighted accuracy as a weak classifier. BODE then combines the selected weak classifiers to classify test instances. Experiments on UCI datasets demonstrate the performance of the algorithm.