References:
The Strength of Weak Learnability. Machine Learning.
Boosting a weak learning algorithm by majority. Information and Computation.
Machine Learning.
NIPS '97 Proceedings of the 1997 conference on Advances in Neural Information Processing Systems 10.
Improving Regressors using Boosting Techniques. ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning.
The Case against Accuracy Estimation for Comparing Induction Algorithms. ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning.
Applying Boosting Techniques to Genetic Programming. Selected Papers from the 5th European Conference on Artificial Evolution.
Impact Studies and Sensitivity Analysis in Medical Data Mining with ROC-based Genetic Learning
ICDM '03 Proceedings of the Third IEEE International Conference on Data Mining
An efficient boosting algorithm for combining preferences. The Journal of Machine Learning Research.
The "boosting" principle was introduced by Schapire and Freund in the 1990s in connection with weak learners in the Probably Approximately Correct (PAC) computational learning framework. Another practice that has developed in recent years is assessing the quality of evolutionary or genetic classifiers with Receiver Operating Characteristic (ROC) curves. Building on the RankBoost algorithm of Freund et al., this article bridges these two techniques and addresses the boosting of ROC-based genetic programming classifiers. Updating the weights after a boosting round turns out to be the keystone of the algorithm, since the ROC curve does not directly indicate which training cases are learned or misclassified. We propose a geometrical interpretation of the ROC curve that attributes an error measure to every training case. We validate our ROCboost algorithm on several benchmarks from the UCI Irvine repository, and we compare the performance of boosted Genetic Programming with published results on ROC-based Evolution Strategies and Support Vector Machines.
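To make the reweighting idea concrete, here is a minimal sketch of a boosting-style weight update in which each training case carries a per-case error measure. The paper derives that measure geometrically from the ROC curve; in this sketch the errors are simply given as inputs, and the `beta` parameter and exponential update rule are assumptions in the spirit of AdaBoost/RankBoost, not the paper's exact formulas.

```python
import math

def boosting_round(weights, errors, beta=0.5):
    """One reweighting step: cases with larger error gain weight.

    weights: current distribution over training cases (sums to 1)
    errors:  per-case error measure in [0, 1]; the paper attributes this
             geometrically from the ROC curve, here it is an input
    beta:    learning-rate-like parameter (assumed, not from the paper)
    """
    # Exponentially up-weight cases with larger error, then renormalize
    # so the weights remain a distribution.
    new_w = [w * math.exp(beta * e) for w, e in zip(weights, errors)]
    total = sum(new_w)
    return [w / total for w in new_w]

# Usage: four training cases, the last two misclassified (error 1.0).
w = boosting_round([0.25, 0.25, 0.25, 0.25], [0.0, 0.0, 1.0, 1.0])
# Misclassified cases now carry more of the distribution mass.
assert w[2] > w[0]
```

The renormalization step is what lets the next boosting round treat the weights as a sampling distribution over the training cases.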