Identifying predictive hubs to condense the training set of $$k$$-nearest neighbour classifiers

  • Authors:
  • Ludwig Lausser; Christoph Müssel; Alexander Melkozerov; Hans A. Kestler

  • Affiliations:
  • Research Group Bioinformatics and Systems Biology, Institute of Neural Information Processing, University of Ulm, Ulm, Germany 89069 (Ludwig Lausser, Christoph Müssel, Hans A. Kestler); Department of Television and Control, Tomsk State University of Control Systems and Radioelectronics, Tomsk, Russia 634050 (Alexander Melkozerov)

  • Venue:
  • Computational Statistics
  • Year:
  • 2014


Abstract

The $$k$$-nearest neighbour classifier is widely used owing to its inherent simplicity and its avoidance of model assumptions. Although the approach has been shown to yield near-optimal classification performance in the limit of infinitely many samples, selecting the most decisive data points can improve classification accuracy considerably in real settings with a limited number of samples. At the same time, restricting the classifier to a subset of representative training samples reduces the required storage and computational resources. We devised a new approach that selects a representative training subset on the basis of an evolutionary optimization procedure. This method chooses those training samples that have a strong influence on the correct prediction of other training samples, in particular those with uncertain labels. The performance of the algorithm is evaluated on different data sets. Additionally, we provide graphical examples of the selection procedure.
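The idea described in the abstract — evolving a binary selection mask over the training set so that a $$k$$-nearest neighbour classifier built only on the selected prototypes still predicts the full training set well — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the fitness function, size penalty, and all parameter values (`pop_size`, `generations`, `mut_rate`) are assumptions chosen for the sketch.

```python
import random
from collections import Counter

def knn_predict(prototypes, query, k=1):
    """Majority vote among the k nearest prototypes (squared Euclidean distance).
    `prototypes` is a list of (point, label) pairs."""
    nearest = sorted(prototypes,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def fitness(mask, data, k=1, penalty=1e-3):
    """Accuracy of k-NN restricted to the selected prototypes, evaluated on the
    whole training set, minus a small (assumed) penalty per retained sample."""
    prototypes = [p for p, keep in zip(data, mask) if keep]
    if len(prototypes) < k:
        return 0.0
    correct = sum(knn_predict(prototypes, x, k) == y for x, y in data)
    return correct / len(data) - penalty * len(prototypes)

def evolve_subset(data, k=1, pop_size=16, generations=25, mut_rate=0.05, seed=1):
    """Simple genetic algorithm over binary masks: elitist selection,
    uniform crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    n = len(data)
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, data, k), reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(elite), rng.choice(elite)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, data, k))
```

On two well-separated clusters, the evolved mask typically retains only a few prototypes per class while the condensed classifier still labels every training point correctly; the size penalty is what pushes the search toward smaller subsets.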