Ensemble learning algorithms such as boosting can achieve better performance by averaging the predictions of base hypotheses. However, existing algorithms combine only a finite number of hypotheses, and the resulting ensemble is usually sparse. It remains unclear whether an ensemble with a larger, or even infinite, number of hypotheses would perform better; moreover, constructing an infinite ensemble is itself a challenging task. In this paper, we formulate an infinite ensemble learning framework based on the support vector machine (SVM). The framework can output an infinite and nonsparse ensemble, and can be used both to construct new kernels for SVM and to interpret some existing ones. We demonstrate the framework with a concrete application, the stump kernel, which embodies infinitely many decision stumps. The stump kernel is simple yet powerful. Experimental results show that SVM with the stump kernel is usually superior to boosting, even on noisy data.
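The idea of a kernel embodying infinitely many decision stumps can be made concrete in one dimension. A stump with threshold a predicts sign(x - a); integrating the agreement sign(x - a)·sign(x' - a) over all thresholds a in an interval [L, R] yields (R - L) - 2|x - x'|, i.e., a constant minus the L1 distance. The sketch below (the interval [L, R] and the scaling are illustrative assumptions, not the paper's exact constants) verifies this numerically:

```python
import numpy as np

# A decision stump with threshold a predicts s_a(x) = sign(x - a).
# Averaging the agreement of x and x' over infinitely many thresholds
# a in [L, R] gives a closed form: (R - L) - 2|x - x'|, a constant
# minus the L1 distance -- the essence of the stump kernel.

def stump_kernel_numeric(x, xp, L=-1.0, R=1.0, n=200_000):
    """Midpoint Riemann-sum approximation of the integral over thresholds."""
    a = np.linspace(L, R, n, endpoint=False) + (R - L) / (2 * n)
    return np.mean(np.sign(x - a) * np.sign(xp - a)) * (R - L)

def stump_kernel_closed(x, xp, L=-1.0, R=1.0):
    """Closed form: Delta - 2|x - x'| with Delta = R - L."""
    return (R - L) - 2 * abs(x - xp)

x, xp = 0.3, -0.2
print(stump_kernel_numeric(x, xp))  # approx 1.0
print(stump_kernel_closed(x, xp))   # exactly 1.0
```

Because the SVM solution is invariant to adding a constant to (and positively scaling) the kernel, such a "constant minus L1 distance" kernel can be plugged into a standard SVM solver, which is what lets the SVM optimize over the infinite stump ensemble implicitly.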