A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing (STOC '94), May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
Sparse Regression Ensembles in Infinite and Finite Hypothesis Spaces
Machine Learning
Improving Regressors using Boosting Techniques
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
A new sequential method for regression problems is studied. The suggested method is motivated by boosting methods for classification problems. Boosting algorithms update the estimator using weighted data. In this paper we construct a sequential estimation method from the viewpoint of nonparametric estimation, using a mixture distribution. The algorithm uses the weighted residuals of the training data. We compare the suggested algorithm to a greedy algorithm in a simple simulation.
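The greedy baseline mentioned in the abstract can be sketched as stagewise least-squares boosting: each round fits a weak learner (here a decision stump) to the current residuals and adds a shrunken copy to the ensemble. This is only an illustrative sketch of the generic residual-fitting idea, not the authors' mixture-distribution method; the names `fit_stump` and `boost_regression` and the uniform weighting are assumptions made for the example.

```python
import numpy as np

def fit_stump(x, y, w):
    """Fit a weighted decision stump (one threshold, two constants) by
    minimizing the weighted squared error over candidate thresholds."""
    best = None
    for t in np.unique(x):
        left, right = x <= t, x > t
        if w[left].sum() == 0 or w[right].sum() == 0:
            continue  # skip splits that leave one side empty
        cl = np.average(y[left], weights=w[left])
        cr = np.average(y[right], weights=w[right])
        pred = np.where(left, cl, cr)
        err = np.sum(w * (y - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, t, cl, cr)
    _, t, cl, cr = best
    return lambda z: np.where(z <= t, cl, cr)

def boost_regression(x, y, n_rounds=200, lr=0.1):
    """Greedy stagewise regression: each round a stump is fit to the
    current residuals y - f(x), then added with shrinkage lr.
    Observation weights are kept uniform here for simplicity."""
    w = np.ones_like(y, dtype=float) / len(y)
    f = np.zeros_like(y, dtype=float)
    stumps = []
    for _ in range(n_rounds):
        r = y - f                 # residuals of the current ensemble
        h = fit_stump(x, r, w)    # weak learner fit to the residuals
        f += lr * h(x)
        stumps.append(h)
    return lambda z: lr * sum(h(z) for h in stumps)
```

Fitting, say, `y = sin(2*pi*x)` on a grid with this routine drives the training error down round by round, which is the behavior the residual-weighted sequential method is compared against.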