An ensemble of Super-Parent-One-Dependence Estimators (SPODEs) offers a powerful yet simple alternative to naive Bayes classifiers, achieving significantly higher classification accuracy at a moderate cost in classification efficiency. Two families of methodologies currently exist for ensembling candidate SPODEs for classification. One is to select only helpful SPODEs and uniformly average their probability estimates, a methodology named model selection. The other is to assign a weight to each SPODE and linearly combine their probability estimates, a methodology named model weighing. This paper presents a theoretical and empirical study comparing model selection and model weighing for ensembling SPODEs. The focus is on maximizing the ensemble's classification accuracy while minimizing its computational time. A number of representative selection and weighing schemes are studied, providing a comprehensive treatment of this topic and identifying effective schemes that offer alternative trade-offs between speed and expected error.
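The two combination schemes contrasted above can be illustrated with a minimal sketch. The functions and example values below are hypothetical and assume each SPODE has already produced a class-probability estimate for a given test instance; SPODE training, superparent choice, and the concrete selection and weighing criteria studied in the paper are omitted.

```python
import numpy as np

# Illustrative sketch only, not the paper's implementation.
# Each row of `probs` is assumed to hold one SPODE's estimate of the
# class distribution P(y | x) for a single test instance.

def combine_by_selection(probs, selected):
    """Model selection: uniformly average the chosen SPODEs' estimates."""
    probs = np.asarray(probs, dtype=float)
    mask = np.asarray(selected, dtype=bool)
    combined = probs[mask].mean(axis=0)
    return combined / combined.sum()  # renormalize over classes

def combine_by_weighing(probs, weights):
    """Model weighing: linearly combine estimates with per-SPODE weights."""
    probs = np.asarray(probs, dtype=float)
    w = np.asarray(weights, dtype=float)
    combined = w @ probs / w.sum()
    return combined / combined.sum()

# Hypothetical example: three candidate SPODEs, two classes.
probs = [[0.7, 0.3],
         [0.6, 0.4],
         [0.2, 0.8]]
print(combine_by_selection(probs, selected=[True, True, False]))  # [0.65 0.35]
print(combine_by_weighing(probs, weights=[0.5, 0.3, 0.2]))        # [0.57 0.43]
```

Note that model selection is the special case of model weighing in which the chosen SPODEs receive equal weights and the rest receive weight zero; the trade-off studied in the paper is between the cheaper uniform averaging and the potentially more accurate weighted combination.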