Numerous models have been proposed to reduce the classification error of Naïve Bayes by weakening its attribute independence assumption, and some have achieved remarkably low error rates. Since ensemble learning is an effective way to reduce a classifier's error, this paper proposes a double-layer Bayesian classifier ensemble (DLBCE) algorithm based on frequent itemsets. DLBCE constructs a double-layer Bayesian classifier (DLBC) for each frequent itemset contained in the new instance and then combines all the classifiers, weighting each one according to its conditional mutual information. Experimental results show that the proposed algorithm outperforms other state-of-the-art algorithms.
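The abstract gives no pseudocode, so the following is a minimal, illustrative Python sketch of the prediction step it describes: one classifier per frequent itemset contained in the test instance, with the classifiers' outputs combined under per-itemset weights. All names here (contains, train_naive_bayes, dlbce_predict, weight_fn) are hypothetical, and a simple naive Bayes trained on the rows each itemset covers stands in for the paper's double-layer Bayesian classifier; this is a sketch under those assumptions, not the authors' implementation.

from collections import Counter, defaultdict
import math


def contains(instance, itemset):
    """True if instance (a dict attr -> value) matches every (attr, value) pair in itemset."""
    return all(instance.get(a) == v for a, v in itemset)


def train_naive_bayes(rows, labels, attrs):
    """Train a Laplace-smoothed naive Bayes over categorical attributes;
    returns a function mapping an instance to class posteriors."""
    classes = sorted(set(labels))
    prior = Counter(labels)
    cond = defaultdict(Counter)          # (attr, class) -> counts of attribute values
    for row, y in zip(rows, labels):
        for a in attrs:
            cond[(a, y)][row[a]] += 1
    n_values = {a: len({r[a] for r in rows}) for a in attrs}

    def predict_proba(x):
        scores = {}
        for c in classes:
            logp = math.log((prior[c] + 1) / (len(labels) + len(classes)))
            for a in attrs:
                logp += math.log((cond[(a, c)][x[a]] + 1) / (prior[c] + n_values[a]))
            scores[c] = logp
        top = max(scores.values())
        exp = {c: math.exp(s - top) for c, s in scores.items()}
        z = sum(exp.values())
        return {c: p / z for c, p in exp.items()}

    return predict_proba


def dlbce_predict(x, train_rows, train_labels, frequent_itemsets, attrs, weight_fn):
    """Combine one classifier per frequent itemset contained in x,
    each posterior scaled by weight_fn(itemset)."""
    votes = defaultdict(float)
    for itemset in frequent_itemsets:
        if not contains(x, itemset):
            continue
        # Stand-in for the paper's double-layer Bayesian classifier:
        # a naive Bayes trained only on the rows this itemset covers.
        covered = [(r, y) for r, y in zip(train_rows, train_labels)
                   if contains(r, itemset)]
        if not covered:
            continue
        rows, labels = zip(*covered)
        clf = train_naive_bayes(list(rows), list(labels), attrs)
        for c, p in clf(x).items():
            votes[c] += weight_fn(itemset) * p
    return max(votes, key=votes.get) if votes else None

Because the abstract does not specify the internal DLBC structure or the exact form of the conditional-mutual-information weighting, the sketch leaves the weight as a pluggable weight_fn parameter; in the paper that weight is derived from conditional mutual information rather than supplied externally.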