When $(X_1, \theta_1), \ldots, (X_n, \theta_n)$ are independent identically distributed random vectors from $\mathbb{R}^d \times \{0, 1\}$ distributed as $(X, \theta)$, and when $\theta$ is estimated by its nearest neighbor estimate $\theta_{(1)}$, Cover and Hart have shown that $P\{\theta_{(1)} \neq \theta\} \to 2E\{\eta(X)(1 - \eta(X))\} \leq 2R^*(1 - R^*)$ as $n \to \infty$, where $R^*$ is the Bayes probability of error and $\eta(x) = P\{\theta = 1 \mid X = x\}$. Their proof requires conditions on the distribution of $(X, \theta)$. We give two proofs, one due to Stone and a short original one, of the same result for all distributions of $(X, \theta)$. If ties are carefully taken care of, we also show that $P\{\theta_{(1)} \neq \theta \mid X_1, \theta_1, \ldots, X_n, \theta_n\}$ converges in probability to a constant for all distributions of $(X, \theta)$, thereby strengthening results of Wagner and Fritz.
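The bound $2E\{\eta(X)(1 - \eta(X))\} \leq 2R^*(1 - R^*)$ quoted above can be seen from a standard convexity argument; the following is a minimal sketch (not taken from the paper, whose contribution is the distribution-free limit statement), assuming only the usual identity $R^* = E\{\min(\eta(X), 1 - \eta(X))\}$ for the Bayes error under 0-1 loss:
\[
\text{let } a(X) = \min\bigl(\eta(X),\, 1 - \eta(X)\bigr), \quad \text{so that } \eta(X)\bigl(1 - \eta(X)\bigr) = a(X)\bigl(1 - a(X)\bigr);
\]
\[
2E\bigl\{\eta(X)\bigl(1 - \eta(X)\bigr)\bigr\}
= 2E\bigl\{a(X)\bigl(1 - a(X)\bigr)\bigr\}
\;\leq\; 2\,E\{a(X)\}\bigl(1 - E\{a(X)\}\bigr)
= 2R^*\bigl(1 - R^*\bigr),
\]
where the inequality is Jensen's inequality applied to the concave map $t \mapsto t(1 - t)$.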