We study how to aggregate the probabilistic predictions generated by different SPODE (Super-Parent One-Dependence Estimator) classifiers. Aggregating such predictions via compression-based weights achieves a slight but consistent improvement in performance over previously existing aggregation methods, including Bayesian Model Averaging and the simple average adopted by the AODE algorithm. We then turn to the choice of the prior probability distribution over the models, an important issue in any Bayesian ensemble of models. To deal robustly with this choice, we replace the single prior over the models with a set of priors (a credal set), thus obtaining a credal ensemble of Bayesian classifiers. The credal ensemble recognizes prior-dependent instances, namely instances whose most probable class changes as the prior over the models varies. On prior-dependent instances, the credal ensemble remains reliable by returning a set of classes rather than a single class. We develop two credal ensembles of SPODEs: the first generalizes Bayesian Model Averaging, the second the compression-based aggregation. Extensive experiments show that the novel ensembles compare favorably to traditional methods for aggregating SPODEs and also to previous credal classifiers.
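The two core ideas of the abstract can be illustrated in a few lines: weighted averaging of per-SPODE class posteriors (uniform weights recover AODE's simple average), and detecting prior-dependent instances by re-running the averaging under each extreme point of a set of priors over the models. The sketch below is purely illustrative and assumes hypothetical probability values and a hypothetical credal set of model weights; it is not the paper's actual estimation procedure.

```python
import numpy as np

def aggregate_spodes(spode_probs, weights):
    """Weighted average of per-SPODE class-probability vectors.

    spode_probs: (n_models, n_classes) array; each row is one SPODE's
    posterior over the classes for a single instance.
    weights: (n_models,) nonnegative model weights; uniform weights
    recover the simple average used by AODE.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the prior over models
    return w @ np.asarray(spode_probs)    # mixture of the SPODE posteriors

def credal_predict(spode_probs, prior_set):
    """Classes that are most probable under at least one prior in the set.

    If more than one class is returned, the instance is prior-dependent
    and the credal ensemble issues the whole set instead of a single class.
    """
    classes = set()
    for prior in prior_set:
        posterior = aggregate_spodes(spode_probs, prior)
        classes.add(int(np.argmax(posterior)))
    return sorted(classes)

# Toy example: three SPODEs, two classes (hypothetical numbers).
probs = [[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]]
# Extreme points of a hypothetical credal set of priors over the models.
priors = [[1, 1, 1], [4, 1, 1], [1, 1, 4]]
print(credal_predict(probs, priors))  # → [0, 1] (prior-dependent instance)
```

With the uniform prior the ensemble prefers class 1, but up-weighting the first SPODE flips the decision to class 0, so the credal ensemble returns {0, 1} rather than committing to either class.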