Machine Learning
Shape quantization and recognition with randomized trees. Neural Computation.
Machine Learning
Random Structures & Algorithms
Consistency of Random Forests and Other Averaging Classifiers. The Journal of Machine Learning Research.
Exact bootstrap k-nearest neighbor learners. Machine Learning.
Nearest neighbor pattern classification. IEEE Transactions on Information Theory.
IEEE Transactions on Information Theory
Maxima-finding algorithms for multidimensional samples: A two-phase approach. Computational Geometry: Theory and Applications.
Random forests for metric learning with implicit pairwise position dependence. Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Analysis of a random forests model. The Journal of Machine Learning Research.
An affine invariant k-nearest neighbor regression estimate. Journal of Multivariate Analysis.
On the mutual nearest neighbors estimate in regression. The Journal of Machine Learning Research.
Let X_1, ..., X_n be identically distributed random vectors in R^d, independently drawn according to some probability density. An observation X_i is said to be a layered nearest neighbour (LNN) of a point x if the hyperrectangle defined by x and X_i contains no other data points. We first establish consistency results on L_n(x), the number of LNN of x. Then, given a sample (X, Y), (X_1, Y_1), ..., (X_n, Y_n) of independent identically distributed random vectors from R^d × R, one may estimate the regression function r(x) = E[Y | X = x] by the LNN estimate r_n(x), defined as an average over the Y_i's corresponding to those X_i which are LNN of x. Under mild conditions on r, we establish the convergence of E|r_n(x) − r(x)|^p to 0 as n → ∞, for almost all x and all p ≥ 1, and discuss the links between r_n and the random forest estimates of Breiman (2001) [8]. We finally show the universal consistency of the bagged (bootstrap-aggregated) nearest neighbour method for regression and classification.
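To make the LNN definition concrete, here is a minimal sketch (not from the paper; the function names and the naive O(n²·d) search are our own) that finds the layered nearest neighbours of a query point x, i.e. the sample points X_i whose axis-aligned hyperrectangle spanned with x contains no other sample point, and averages the corresponding Y_i's to form the LNN regression estimate:

```python
import numpy as np

def layered_nearest_neighbours(x, X):
    """Indices i such that the axis-aligned hyperrectangle with
    opposite corners x and X[i] contains no other sample point.
    Boundary ties count as "inside"; for data drawn from a density
    they occur with probability zero."""
    lo = np.minimum(x, X)  # lower corner of each box, per coordinate
    hi = np.maximum(x, X)  # upper corner of each box, per coordinate
    lnn = []
    for i in range(len(X)):
        inside = np.all((X >= lo[i]) & (X <= hi[i]), axis=1)
        inside[i] = False  # X[i] itself does not disqualify its own box
        if not inside.any():
            lnn.append(i)
    return lnn

def lnn_regression(x, X, Y):
    """LNN regression estimate r_n(x): the average of the Y_i's
    over the layered nearest neighbours of x."""
    idx = layered_nearest_neighbours(x, X)
    return float(np.mean(Y[idx]))
```

For example, with x at the origin and samples at (1, 1), (2, 2), (−1, 1), (0.5, 3), the point (2, 2) is not an LNN because its box contains (1, 1), while the other three are.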