Locally determining the number of neighbors in the k-nearest neighbor rule based on statistical confidence

  • Authors:
  • Jigang Wang; Predrag Neskovic; Leon N. Cooper

  • Affiliations:
  • Institute for Brain and Neural Systems, Department of Physics, Brown University, Providence, RI (all authors)

  • Venue:
  • ICNC'05: Proceedings of the First International Conference on Advances in Natural Computation - Volume Part I
  • Year:
  • 2005

Abstract

The k-nearest neighbor rule is one of the most attractive pattern classification algorithms. In practice, the value of k is usually determined by the cross-validation method. In this work, we propose a new method that locally determines the number of nearest neighbors based on the concept of statistical confidence. We define the confidence associated with decisions that are made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to those obtained by the k-nearest neighbor rule. In contrast to the k-nearest neighbor rule that uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels.
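Below is a minimal sketch of the idea described in the abstract: for each query point, neighbors are added one at a time in order of distance until the majority decision reaches a chosen confidence threshold, with low-confidence queries rejected. The confidence measure used here (one minus a binomial tail probability under a 50/50 null for the two-class case) is an illustrative stand-in, not necessarily the paper's exact definition, and the function names and parameters (`majority_confidence`, `confident_nn_predict`, `conf_threshold`) are hypothetical.

```python
import numpy as np
from math import comb


def majority_confidence(n_pos, n_neg):
    """Confidence that the observed majority reflects the true majority.

    Sketched here as 1 minus the probability of seeing a split at least
    this lopsided under a Binomial(n, 0.5) null (two-class case). This is
    an assumed proxy for the paper's statistical confidence.
    """
    n = n_pos + n_neg
    k = max(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return 1.0 - tail


def confident_nn_predict(X_train, y_train, x, conf_threshold=0.9, max_k=None):
    """Classify x by growing the neighborhood until the majority vote is
    sufficiently confident; labels are assumed to be 0/1.

    Returns (label, confidence), or (None, confidence) as a reject if the
    threshold is never reached within max_k neighbors.
    """
    # Sort training points by Euclidean distance to the query.
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    if max_k is None:
        max_k = len(order)

    n_pos = n_neg = 0
    conf = 0.0
    for idx in order[:max_k]:
        if y_train[idx] == 1:
            n_pos += 1
        else:
            n_neg += 1
        conf = majority_confidence(n_pos, n_neg)
        if conf >= conf_threshold:
            return (1 if n_pos > n_neg else 0), conf
    return None, conf
```

Raising `conf_threshold` trades a higher reject rate for a lower error rate on the accepted patterns, mirroring the trade-off discussed in the abstract.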