Wrappers have recently been used to optimize parameters of learning algorithms. In this paper we investigate the use of a wrapper for estimating the correct number of boosting iterations in the presence of class noise. Contrary to the naive approach, which would be quadratic in the number of boosting iterations, the incremental algorithm described here is linear. Additionally, directly using the k ensembles generated during the k-fold cross-validation search for prediction usually improves classification performance further. This improvement can be attributed to the reduction in variance obtained by averaging k ensembles instead of using only one. Consequently, cross-validation as used here, termed wrapping, can be viewed as yet another ensemble learner, similar in spirit to bagging but also somewhat related to stacking.
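
A minimal sketch of the idea, not the authors' implementation: it uses scikit-learn's AdaBoostClassifier as a stand-in for the paper's boosting learner, and its staged_predict method, which scores every prefix of an already-trained ensemble in a single pass. That is what keeps the search over ensemble sizes linear in the maximum number of iterations, since no retraining happens per candidate size. The names wrap_boost, wrap_predict, and t_star are hypothetical, introduced only for this example.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import StratifiedKFold

    def wrap_boost(X, y, t_max=50, k=10, seed=0):
        # Grow one boosted ensemble per fold, then score every prefix of it
        # on the held-out part via staged_predict: one training pass per fold,
        # so the size search is linear in t_max (the naive approach retrains
        # from scratch for each candidate size, which is quadratic).
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
        acc = np.zeros((k, t_max))  # acc[f, t-1]: fold f accuracy with t members
        models = []
        for f, (tr, va) in enumerate(skf.split(X, y)):
            m = AdaBoostClassifier(n_estimators=t_max, random_state=seed)
            m.fit(X[tr], y[tr])
            last = 0
            for t, pred in enumerate(m.staged_predict(X[va])):
                acc[f, t] = np.mean(pred == y[va])
                last = t
            acc[f, last + 1:] = acc[f, last]  # carry forward if boosting stopped early
            models.append(m)
        t_star = int(np.argmax(acc.mean(axis=0))) + 1  # size with best mean CV accuracy
        return models, t_star

    def truncated_pred(m, X, t_star):
        # Prediction of the ensemble's first t_star members (or all of them,
        # if boosting stopped early).
        pred = None
        for t, p in enumerate(m.staged_predict(X), start=1):
            pred = p
            if t == t_star:
                break
        return pred

    def wrap_predict(models, t_star, X):
        # "Wrapping": keep all k cross-validation ensembles, truncate each at
        # t_star members, and combine them by majority vote (assumes integer
        # class labels).
        votes = np.array([truncated_pred(m, X, t_star) for m in models])
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

    # Toy data with roughly 10% class noise via flip_y.
    X, y = make_classification(n_samples=500, flip_y=0.1, random_state=0)
    models, t_star = wrap_boost(X, y)
    print("chosen number of boosting iterations:", t_star)
    print("training accuracy of the wrapped ensemble:",
          np.mean(wrap_predict(models, t_star, X) == y))

The final majority vote over the k fold ensembles is the bagging-like step the abstract describes: each ensemble saw a slightly different training set, so averaging their predictions reduces variance compared with keeping only one of them.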