Gradient Boosting and bagging applied to regressors can reduce the error due to bias and variance, respectively. Alternatively, Stochastic Gradient Boosting (SGB) and Iterated Bagging (IB) attempt to reduce the contributions of both bias and variance to error simultaneously. We provide an extensive empirical analysis of these methods, along with two alternative bias/variance reduction approaches: bagging Gradient Boosting (BagGB) and bagging Stochastic Gradient Boosting (BagSGB). Experimental results demonstrate that SGB does not perform as well as IB or the alternative approaches. Furthermore, the results show that, while BagGB and BagSGB perform competitively for low-bias learners, Iterated Bagging is in general the most effective of these methods.
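To make the bagged-boosting combinations concrete, the following is a minimal sketch in Python using scikit-learn; it is an illustration under assumed hyperparameters, not the authors' experimental setup. In scikit-learn's GradientBoostingRegressor, subsample=1.0 corresponds to plain Gradient Boosting and subsample < 1.0 to Stochastic Gradient Boosting; wrapping either in a BaggingRegressor yields BagGB or BagSGB.

```python
# Sketch of GB, SGB, BagGB, and BagSGB with scikit-learn.
# Hyperparameter values (n_estimators, subsample, cv) are illustrative placeholders.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=500, random_state=0)

# Gradient Boosting: stage-wise fitting of residuals, mainly reduces bias.
gb = GradientBoostingRegressor(n_estimators=100, subsample=1.0, random_state=0)

# Stochastic Gradient Boosting: each stage fits a random subsample of the data.
sgb = GradientBoostingRegressor(n_estimators=100, subsample=0.5, random_state=0)

# BagGB / BagSGB: bag the boosted regressor to also attack variance.
bag_gb = BaggingRegressor(gb, n_estimators=10, random_state=0)
bag_sgb = BaggingRegressor(sgb, n_estimators=10, random_state=0)

for name, model in [("GB", gb), ("SGB", sgb),
                    ("BagGB", bag_gb), ("BagSGB", bag_sgb)]:
    mse = -cross_val_score(model, X, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    print(f"{name}: 5-fold CV MSE = {mse:.3f}")
```

Iterated Bagging, by contrast, re-fits successive bagged ensembles to out-of-bag residuals and has no off-the-shelf scikit-learn equivalent, so it is omitted from the sketch.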