Communications of the ACM
The design and analysis of efficient learning algorithms
Cryptographic limitations on learning Boolean formulae and finite automata
Journal of the ACM (JACM)
An introduction to computational learning theory
Boosting a weak learning algorithm by majority
Information and Computation
On efficient agnostic learning of linear combinations of basis functions
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Machine Learning
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
An adaptive version of the boost by majority algorithm
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Prediction games and arcing algorithms
Neural Computation
Improved Boosting Algorithms Using Confidence-rated Predictions
Machine Learning - The Eleventh Annual Conference on Computational Learning Theory
Machine Learning
Learning in Neural Networks: Theoretical Foundations
Improving Regressors using Boosting Techniques
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
A Boosting Algorithm for Regression
ICANN '97 Proceedings of the 7th International Conference on Artificial Neural Networks
COLT '00 Proceedings of the Thirteenth Annual Conference on Computational Learning Theory
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 1
Local averaging of heterogeneous regression models
International Journal of Hybrid Intelligent Systems
High-Performance Rotation Invariant Multiview Face Detection
IEEE Transactions on Pattern Analysis and Machine Intelligence
AdaRank: a boosting algorithm for information retrieval
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Boosting over Groups and Its Application to Acronym-Expansion Extraction
ADMA '08 Proceedings of the 4th international conference on Advanced Data Mining and Applications
AdaSum: an adaptive model for summarization
Proceedings of the 17th ACM conference on Information and knowledge management
Using Boosting to prune Double-Bagging ensembles
Computational Statistics & Data Analysis
Bagging different instead of similar models for regression and classification problems
International Journal of Computer Applications in Technology
Boosting based conditional quantile estimation for regression and binary classification
MICAI'10 Proceedings of the 9th Mexican international conference on Artificial intelligence conference on Advances in soft computing: Part II
Study of the behavior of a new boosting algorithm for recurrent neural networks
ICANN'05 Proceedings of the 15th international conference on Artificial neural networks: formal models and their applications - Volume Part II
QBoost: Predicting quantiles with boosting for regression and binary classification
Expert Systems with Applications: An International Journal
Predicting chaotic time series by boosted recurrent neural networks
ICONIP'06 Proceedings of the 13th international conference on Neural Information Processing - Volume Part II
Local additive regression of decision stumps
SETN'06 Proceedings of the 4th Hellenic conference on Advances in Artificial Intelligence
Ensemble approaches for regression: A survey
ACM Computing Surveys (CSUR)
IWANN'13 Proceedings of the 12th international conference on Artificial Neural Networks: advances in computational intelligence - Volume Part I
On a method for constructing ensembles of regression models
Automation and Remote Control
Face Alignment by Explicit Shape Regression
International Journal of Computer Vision
In this paper we examine ensemble methods for regression that leverage or “boost” base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its strong theoretical guarantees. We present several gradient descent leveraging algorithms for regression and prove AdaBoost-style bounds on their sample errors using intuitive assumptions on the base learners. We bound the complexity of the regression functions produced in order to derive PAC-style bounds on their generalization errors. Experiments validate our theoretical results.
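The abstract describes the core leveraging loop: each round fits a base regressor to a reweighted or modified sample (for squared loss, the current residuals, i.e. the negative gradient of the loss) and adds it to the ensemble with a step size. The following is a minimal sketch of that idea using decision stumps as base learners; the function names, the stump base learner, and the step-size parameter are illustrative assumptions, not the paper's actual algorithms.

```python
def fit_stump(xs, residuals):
    """Base learner: find the single threshold split on xs that
    minimizes squared error against the current residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost_regression(xs, ys, rounds=50, step=0.5):
    """Gradient-descent leveraging sketch: repeatedly fit the base
    learner to the residuals of the current additive model."""
    ensemble = []

    def predict(x):
        return sum(step * h(x) for h in ensemble)

    for _ in range(rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        ensemble.append(fit_stump(xs, residuals))
    return predict

# Toy piecewise-constant target (illustrative data).
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 1, 1, 2, 2]
f = boost_regression(xs, ys)
```

Because each round fits the residual of the previous rounds, the training error shrinks geometrically under the kind of edge/correlation assumptions on the base learner that the abstract alludes to; on this toy data the ensemble drives the sample squared error near zero.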