Breiman (Machine Learning, 24(2), 123–140) showed that bagging could effectively reduce the variance of regression predictors while leaving the bias relatively unchanged. A new form of bagging we call iterated bagging is effective in reducing both bias and variance. The procedure works in stages: the first stage is ordinary bagging. Based on the outcomes of the first stage, the output values are altered, and a second stage of bagging is carried out using the altered output values. This is repeated until a simple rule stops the process. The method is tested using both trees and nearest-neighbor regression methods. Accuracy on the Boston Housing benchmark is comparable to the best results obtained with highly tuned, compute-intensive Support Vector Regression Machines. Some heuristic theory is given to clarify what is going on. Application to two-class classification data gives interesting results.
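The staged procedure is compact enough to sketch in code. Below is a minimal Python illustration, assuming NumPy arrays and bootstrap-aggregated regression trees, with targets altered by subtracting each stage's out-of-bag predictions and a stopping rule that halts when the out-of-bag residual error stops improving. The exact target update and stopping threshold are plausible stand-ins rather than the paper's precise prescription, and the function names (`iterated_bagging`, `bagged_stage`) are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_stage(X, target, n_trees, rng):
    """Fit one bagging stage; return its trees and out-of-bag predictions."""
    n = len(target)
    trees, oob_sum, oob_cnt = [], np.zeros(n), np.zeros(n)
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)         # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)    # cases left out of the sample
        tree = DecisionTreeRegressor().fit(X[idx], target[idx])
        trees.append(tree)
        oob_sum[oob] += tree.predict(X[oob])
        oob_cnt[oob] += 1
    # Average the out-of-bag votes; a case that landed in every bootstrap
    # sample (rare for moderate n_trees) simply keeps a prediction of 0.
    oob_pred = oob_sum / np.maximum(oob_cnt, 1)
    return trees, oob_pred

def iterated_bagging(X, y, n_trees=50, max_stages=10, seed=0):
    """Stage 1 bags the raw outputs; each later stage bags altered outputs
    (here, the residuals of the out-of-bag predictions) and the process
    stops once the out-of-bag residual error no longer improves."""
    rng = np.random.default_rng(seed)
    stages, target = [], np.asarray(y, dtype=float).copy()
    best_err = np.inf
    for _ in range(max_stages):
        trees, oob_pred = bagged_stage(X, target, n_trees, rng)
        err = np.mean((target - oob_pred) ** 2)  # out-of-bag residual error
        if err >= best_err:                      # simple stopping rule
            break
        best_err = err
        stages.append(trees)
        target = target - oob_pred               # altered outputs for next stage

    def predict(X_new):
        # The final predictor is the sum of the per-stage ensemble averages.
        return sum(np.mean([t.predict(X_new) for t in ts], axis=0)
                   for ts in stages)
    return predict
```

Calling `predict = iterated_bagging(X, y)` yields a predictor whose output is the sum of the per-stage ensemble averages; it is this summing of stages fit to altered outputs that lets later stages correct the bias left by earlier ones, while the averaging within each stage keeps the variance low.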