An ensemble is a group of learners that work together as a committee to solve a problem. Existing ensemble learning algorithms often generate unnecessarily large ensembles, which consume extra computational resources and may degrade generalization performance. Ensemble pruning algorithms aim to find a good subset of ensemble members to constitute a small ensemble, which saves computational resources and performs as well as, or better than, the unpruned ensemble. This paper introduces a probabilistic ensemble pruning algorithm that prunes the ensemble by choosing a set of “sparse” combination weights, most of which are zero. To obtain sparse combination weights while satisfying the nonnegativity constraint on those weights, a left-truncated, nonnegative Gaussian prior is adopted over every combination weight. The expectation propagation (EP) algorithm is employed to approximate the posterior of the weight vector. The leave-one-out (LOO) error is obtained as a by-product of EP training without extra computation and is a good indicator of generalization error; it is therefore used together with the Bayesian evidence for model selection in this algorithm. An empirical study on several regression and classification benchmark data sets shows that our algorithm uses far fewer component learners yet performs as well as, or better than, the unpruned ensemble. Our results are highly competitive with those of other ensemble pruning algorithms.
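To make the pruning idea concrete, the sketch below illustrates how sparse nonnegative combination weights translate directly into a pruned ensemble. It is a minimal stand-in, not the paper's method: instead of the truncated-Gaussian prior with EP posterior approximation, it uses plain nonnegative least squares (which also tends to drive many weights exactly to zero); the function name, threshold, and toy data are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def prune_ensemble(member_preds, targets, threshold=1e-3):
    """Illustrative ensemble pruning via sparse nonnegative weights.

    member_preds: (n_samples, n_members) array; column j holds the
                  predictions of ensemble member j on the training set.
    targets:      (n_samples,) regression targets.

    NOTE: this substitutes nonnegative least squares for the paper's
    truncated-Gaussian-prior / expectation-propagation procedure; it
    only mimics the resulting sparsity pattern.
    """
    # Nonnegative least squares: min ||member_preds @ w - targets||, w >= 0.
    weights, _ = nnls(member_preds, targets)
    weights[weights < threshold] = 0.0     # zero out near-zero weights
    kept = np.flatnonzero(weights)         # members that survive pruning
    return weights, kept

# Toy usage: 10 "members", of which only the first 3 carry signal.
rng = np.random.default_rng(0)
preds = rng.normal(size=(200, 10))
y = preds[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.05, size=200)
w, kept = prune_ensemble(preds, y)
print("retained members:", kept, "weights:", np.round(w[kept], 3))
```

In the paper's actual algorithm, each weight instead receives a Gaussian prior truncated to w_i >= 0, EP approximates the posterior over the weight vector, and the LOO error used for model selection falls out of the EP updates at no extra cost; the NNLS stand-in above shares only the outcome that most combination weights end up exactly zero.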