Bagging is a simple and effective technique for generating an ensemble of classifiers, but the original bagging ensemble typically contains many redundant base classifiers. We design a pruning approach to bagging that improves its generalization power. The proposed technique takes a margin-distribution-based classification loss as the optimization objective and minimizes this loss on the training samples, which leads to an optimal margin distribution. To derive a sparse ensemble, an l1 regularization term is introduced to control the ensemble size, yielding a sparse weight vector over the base classifiers. We then rank the base classifiers by their weights and combine those with large weights. We call this technique MArgin Distribution based Bagging pruning (MAD-Bagging). Both simple voting and weighted voting are used to combine the outputs of the selected base classifiers. The performance of the pruned ensemble is evaluated on several UCI benchmark tasks, with base classifiers trained by SVM, CART, and the nearest-neighbor (1NN) rule, respectively. The results show that margin-distribution-based pruning of CART bagging significantly improves classification accuracy, whereas pruned SVM and 1NN bagging improve little over single classifiers.