Semi-supervised Robust Alternating AdaBoost
CIARP '09 Proceedings of the 14th Iberoamerican Conference on Pattern Recognition: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
Ensemble methods are general techniques for improving the accuracy of any given learning algorithm. Boosting is a learning algorithm that builds classifier ensembles incrementally. In this work we propose an improvement of the classical and inverse AdaBoost algorithms to deal with outliers in the data. Our Robust Alternating AdaBoost (RADA) algorithm alternates between classic and inverse AdaBoost to create a more stable algorithm. RADA bounds the influence of outliers on the empirical distribution, detects and diminishes the empirical probability of "bad" samples, and achieves more accurate classification on contaminated data. We report performance results on synthetic and real datasets, the latter obtained from a benchmark site.
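The abstract does not give RADA's exact update rules, so the Python sketch below is only illustrative of the general idea: an AdaBoost loop that alternates between the classic update (up-weighting misclassified samples) and an "inverse" update (down-weighting them), while capping any single sample's weight. The alternation schedule, the weight_cap bound, and the form of the inverse update are assumptions for illustration, not the paper's specification.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def rada_sketch(X, y, n_rounds=20, weight_cap=0.2):
        """Illustrative alternating AdaBoost with bounded sample weights.

        Assumes labels y in {-1, +1}. The alternation rule, the cap,
        and the inverse update are assumptions, not RADA's exact rules.
        """
        n = len(y)
        w = np.full(n, 1.0 / n)          # empirical distribution over samples
        learners, alphas = [], []
        for t in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            # Even rounds: classic update, up-weight mistakes.
            # Odd rounds: inverse update, down-weight mistakes, which
            # discourages chasing persistently misclassified outliers.
            sign = 1.0 if t % 2 == 0 else -1.0
            w *= np.exp(-sign * alpha * y * pred)
            # Cap each weight to bound the influence of any single
            # (possibly outlying) sample on the empirical distribution.
            w = np.minimum(w, weight_cap)
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)

        def predict(Xq):
            # Weighted majority vote of the learned stumps.
            votes = sum(a * h.predict(Xq) for a, h in zip(alphas, learners))
            return np.sign(votes)
        return predict

A down-weighting pass of this kind is one simple way to realize the abstract's claim of diminishing the empirical probability of "bad" samples; the actual criterion RADA uses to detect them is described in the paper itself.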