Of the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at considerable computational cost. We present a new approach to weakening the attribute independence assumption: averaging over all members of a constrained class of classifiers. In extensive experiments this technique delivers prediction accuracy comparable to LBR and Super-Parent TAN, with substantially better computational efficiency at test time than the former and at training time than the latter. The new algorithm is shown to have low variance and to be well suited to incremental learning.
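The averaging idea can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation (the class name, smoothing choices, and data layout are assumptions): each component classifier weakens the independence assumption by conditioning every attribute on the class and one shared "super-parent" attribute, and the final prediction averages the joint-probability estimates of all such one-dependence classifiers. Because the model is built purely from frequency counts, it can also be updated incrementally, one instance at a time.

```python
from collections import Counter


class AveragedOneDependence:
    """Sketch of averaging a constrained class of one-dependence classifiers.

    Each component classifier i estimates P(y, x) as
    P(y, x_i) * prod_j P(x_j | y, x_i); the prediction averages these
    estimates over all attributes i used as the super-parent.
    """

    def fit(self, X, y):
        self.n = len(X)
        self.n_attrs = len(X[0])
        self.classes = sorted(set(y))
        self.attr_vals = [sorted({row[i] for row in X})
                          for i in range(self.n_attrs)]
        # Frequency counts: (class, attr, value) and pairwise
        # (class, attr_i, value_i, attr_j, value_j).
        self.c_yi = Counter()
        self.c_yij = Counter()
        for row, c in zip(X, y):
            for i, xi in enumerate(row):
                self.c_yi[(c, i, xi)] += 1
                for j, xj in enumerate(row):
                    self.c_yij[(c, i, xi, j, xj)] += 1
        return self

    def predict(self, x):
        best, best_score = None, -1.0
        for c in self.classes:
            score = 0.0
            for i, xi in enumerate(x):
                # P(y, x_i) with Laplace smoothing.
                p = (self.c_yi[(c, i, xi)] + 1.0) / (
                    self.n + len(self.classes) * len(self.attr_vals[i]))
                # P(x_j | y, x_i) for every other attribute j.
                for j, xj in enumerate(x):
                    if j == i:
                        continue
                    p *= (self.c_yij[(c, i, xi, j, xj)] + 1.0) / (
                        self.c_yi[(c, i, xi)] + len(self.attr_vals[j]))
                score += p  # sum over super-parents; averaging constant omitted
            if score > best_score:
                best, best_score = c, score
        return best
```

Training and prediction here cost O(n·a²) and O(k·a²) respectively (n instances, a attributes, k classes), which is the intuition behind avoiding LBR's per-test-instance model construction and Super-Parent TAN's structure search at training time.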