Bagging is a simple way to combine estimates in order to improve their performance. This method, suggested by Breiman in 1996, proceeds by resampling from the original data set, constructing a predictor from each subsample, and combining the resulting predictors. By bagging an n-sample, the crude nearest neighbor regression estimate is turned into a consistent weighted nearest neighbor regression estimate, which is amenable to statistical analysis. Letting the resampling size k_n grow appropriately with n, it is shown that this estimate may achieve the optimal rate of convergence, regardless of whether resampling is done with or without replacement. Since the estimate achieving the optimal rate depends on the unknown distribution of the observations, adaptation results obtained by data-splitting are also presented.
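As a concrete illustration, here is a minimal Python sketch (not the authors' implementation) of the bagged nearest neighbor estimate described above, approximated by Monte Carlo: draw subsamples of size k_n, compute the crude 1-NN prediction on each, and average. The function name bagged_1nn_predict, the number of resampling rounds B, and the toy data in the usage example are assumptions made purely for illustration.

import numpy as np

def bagged_1nn_predict(X, y, x, k_n, B=200, replace=False, seed=None):
    # Bagged 1-nearest-neighbor regression estimate at query point x:
    # draw B subsamples of size k_n from the n training pairs (with or
    # without replacement), take the crude 1-NN prediction on each
    # subsample, and average the B predictions.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    preds = np.empty(B)
    for b in range(B):
        idx = rng.choice(n, size=k_n, replace=replace)
        dists = np.linalg.norm(X[idx] - x, axis=1)   # distances within the subsample
        preds[b] = y[idx[np.argmin(dists)]]          # label of the nearest point
    return preds.mean()

# Usage on synthetic data: estimate the regression function at x = 0.3.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(500)
print(bagged_1nn_predict(X, y, x=np.array([0.3]), k_n=25, seed=1))

Averaging over many subsamples of size k_n places most of the weight on the few training points nearest to x, which is why the bagged estimate behaves like a weighted nearest neighbor estimate and becomes tractable for statistical analysis.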