Convergence of the nearest neighbor rule

  • Authors: T. Wagner
  • Affiliations: -
  • Venue: IEEE Transactions on Information Theory
  • Year: 1971

Abstract

If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using $n$ known samples (denoted by $R_n$) converges to a number $R$ as $n$ tends to infinity, where $R^{\ast} \leq R \leq 2R^{\ast}(1 - R^{\ast})$ and $R^{\ast}$ is the Bayes probability of error. Here it is shown that when the samples lie in $n$-dimensional Euclidean space, the probability of error for the NNR conditioned on the $n$ known samples (denoted by $L_n$, so that $EL_n = R_n$) converges to $R$ with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of $R$ from the $n$ known samples are shown to be consistent. Rates of convergence of $L_n$ to $R$ are also given.
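Since the abstract only names the quantities involved, the following is a minimal sketch in Python of the 1-NN rule and a leave-one-out error count, one natural empirical stand-in for the conditional error $L_n$. The function names (`nn_classify`, `loo_error`) and the toy Gaussian data are illustrative assumptions; this does not reproduce the paper's two consistent estimators of $R$.

```python
# Minimal sketch: 1-nearest-neighbor rule and a leave-one-out error count.
# Names and data here are illustrative, not taken from the paper.
import numpy as np

def nn_classify(X_train, y_train, x):
    """Label x with the class of its nearest training sample (Euclidean metric)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

def loo_error(X, y):
    """Leave-one-out error of the 1-NN rule: each sample is classified
    by its nearest neighbor among the remaining n - 1 samples."""
    n = len(y)
    mistakes = 0
    for i in range(n):
        mask = np.arange(n) != i          # hold out sample i
        if nn_classify(X[mask], y[mask], X[i]) != y[i]:
            mistakes += 1
    return mistakes / n

# Toy experiment: two overlapping Gaussian classes in the plane.
rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + y[:, None] * 1.5  # class 1 shifted by (1.5, 1.5)
print(f"leave-one-out 1-NN error with n={n}: {loo_error(X, y):.3f}")
```

On data like this, the leave-one-out count should, for large $n$, settle near the limit $R$, which the Cover-Hart result places between the Bayes error $R^{\ast}$ and $2R^{\ast}(1 - R^{\ast})$; the almost-sure convergence of $L_n$ to $R$ is what the abstract asserts.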