To select or to weigh: a comparative study of model selection and model weighing for SPODE ensembles
ECML'06: Proceedings of the 17th European Conference on Machine Learning
SuperParent One-Dependence Estimators (SPODEs) relax naive Bayes' attribute-independence assumption by allowing every attribute to depend on a single common attribute, the superparent, in addition to the class. An ensemble of SPODEs can achieve high classification accuracy at modest computational cost. This paper investigates how to select SPODEs for ensembling: several popular model selection strategies are presented, their learning efficacy and efficiency are analyzed theoretically and verified empirically, and guidelines are accordingly derived for choosing among selection criteria in differing contexts.
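For concreteness, the sketch below (Python, not taken from the paper) shows classification with a uniform SPODE ensemble in the style of AODE, where every attribute takes a turn as the superparent and the per-SPODE joint estimates are averaged. The function names, dictionary layout, and Laplace smoothing are illustrative assumptions, and discrete attributes encoded as hashable values are assumed throughout.

```python
# A minimal sketch of classifying with a SPODE ensemble, AODE-style:
# each attribute p serves as superparent, and the ensemble averages
# P(c, x_p) * prod_{i != p} P(x_i | c, x_p) over all d SPODEs.
# Names here are hypothetical, not from the paper's implementation.

from collections import Counter

import numpy as np


def fit_spode_counts(X, y):
    """Gather the frequency counts that every SPODE needs."""
    X, y = np.asarray(X), np.asarray(y)
    n, d = X.shape
    model = {
        "joint": Counter(),   # (c, p, x_p)         -> class & superparent count
        "child": Counter(),   # (c, p, x_p, i, x_i) -> count with child value
        "n": n,
        "d": d,
        "classes": sorted(set(y.tolist())),
        "values": [sorted(set(X[:, i].tolist())) for i in range(d)],
    }
    for row, c in zip(X, y):
        for p in range(d):
            model["joint"][(c, p, row[p])] += 1
            for i in range(d):
                if i != p:
                    model["child"][(c, p, row[p], i, row[i])] += 1
    return model


def predict_one(model, x):
    """Return the class maximizing the average SPODE joint estimate,
    with Laplace smoothing on every probability."""
    n, d = model["n"], model["d"]
    best_class, best_score = None, -1.0
    for c in model["classes"]:
        score = 0.0
        for p in range(d):                  # the SPODE rooted at superparent p
            sp_count = model["joint"][(c, p, x[p])]
            est = (sp_count + 1.0) / (      # smoothed P(c, x_p)
                n + len(model["classes"]) * len(model["values"][p])
            )
            for i in range(d):              # children of class and superparent
                if i == p:
                    continue
                est *= (model["child"][(c, p, x[p], i, x[i])] + 1.0) / (
                    sp_count + len(model["values"][i])
                )
            score += est
        score /= d                          # uniform average over the d SPODEs
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

The paper's question is then which of the d SPODEs to keep in the average: a selection criterion would restrict the loop over superparents to a chosen subset rather than using all of them uniformly.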