The application of AdaBoost for distributed, scalable and on-line learning. In KDD '99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
MultiBoosting: A Technique for Combining Boosting and Wagging. In Machine Learning.
The distributed boosting algorithm. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
A Brief Introduction to Boosting. In IJCAI '99: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.
Distributed Pasting of Small Votes. In MCS '02: Proceedings of the Third International Workshop on Multiple Classifier Systems.
Online Ensemble Learning: An Empirical Study. In Machine Learning.
Adaptive mixtures of local experts. In Neural Computation.
Boosting and other ensemble methods. In Neural Computation.
This paper presents a new boosting (arcing) algorithm called POCA, Parallel Online Continuous Arcing. Unlike traditional boosting algorithms such as Arc-x4 and AdaBoost, which construct ensembles by adding and training weak learners sequentially on a round-by-round basis, POCA trains the entire ensemble continuously and in parallel. Because ensemble members are not frozen after an initial learning period (as in traditional boosting), POCA can adapt rapidly to non-stationary environments, and because it does not require explicit scoring of a fixed exemplar set, it can perform online learning on non-repeating data. We present results from experiments conducted using neural-network experts showing that POCA is typically faster and more adaptive than existing boosting algorithms. The results reported for the UCI letter dataset are, to our knowledge, the best published scores to date.
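The abstract's key idea — every ensemble member is updated on each incoming example, with later members emphasizing examples the earlier members got wrong — can be illustrated with a small sketch. This is not the paper's POCA algorithm (its exact update rule is not given here); it is a hypothetical online variant of Arc-x4-style weighting, with simple perceptrons standing in for the neural-network experts, where each member's learning rate is scaled by `(1 + m)^4` for `m` mistakes made by the members before it on the current example.

```python
import numpy as np

class OnlineArcingEnsemble:
    """Illustrative sketch only (not the paper's POCA): all members are
    trained continuously on the incoming stream, and each member's update
    is scaled Arc-x4 style by the mistakes of preceding members, so later
    members concentrate on hard examples. No fixed exemplar set is kept."""

    def __init__(self, n_members, n_features, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # one linear expert (perceptron weight vector) per ensemble member
        self.w = rng.normal(scale=0.01, size=(n_members, n_features))
        self.lr = lr

    def predict(self, x):
        # unweighted majority vote over the members' +/-1 predictions
        votes = np.where(self.w @ x >= 0, 1.0, -1.0)
        return 1.0 if votes.sum() >= 0 else -1.0

    def update(self, x, y):
        """Process one (x, y) example from the stream; y is +/-1."""
        mistakes = 0  # mistakes made by earlier members on this example
        for i in range(len(self.w)):
            pred = 1.0 if self.w[i] @ x >= 0 else -1.0
            emphasis = (1 + mistakes) ** 4  # Arc-x4 style weighting
            if pred != y:
                # perceptron step, scaled by the arcing emphasis
                self.w[i] += self.lr * emphasis * y * x
                mistakes += 1
        return self.predict(x)
```

A usage example on a synthetic linearly separable stream: create the ensemble, call `update` once per arriving example, and use `predict` for inference. Because every member keeps learning on every example, the ensemble keeps adapting if the stream's distribution drifts, which is the property the abstract highlights for non-stationary environments.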