A novel prototype reduction method for the K-nearest neighbor algorithm with K≥1

  • Authors:
  • Tao Yang; Longbing Cao; Chengqi Zhang

  • Affiliations:
  • Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia (all authors)

  • Venue:
  • PAKDD'10: Proceedings of the 14th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Part II
  • Year:
  • 2010

Abstract

In this paper, a novel prototype reduction algorithm is proposed, which aims at reducing the storage requirement and enhancing the online speed while retaining the same level of accuracy for a K-nearest neighbor (KNN) classifier. To achieve this goal, our proposed algorithm learns the weighted similarity function for a KNN classifier by maximizing the leave-one-out cross-validation accuracy. Unlike the classical methods PW, LPD and WDNN, which can only work with K=1, our developed algorithm can work with K≥1. This flexibility allows our learning algorithm to achieve superior classification accuracy and noise robustness. The proposed approach is assessed through experiments on twenty real-world benchmark data sets. In all of these experiments, the proposed approach dramatically reduces the storage requirement and online time of KNN while achieving equal or better accuracy, and it yields results comparable to several prototype reduction methods proposed in the literature.
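
To make the general idea concrete, the sketch below illustrates the kind of objective the abstract refers to: scoring a set of prototypes by the leave-one-out (LOO) accuracy of a weighted KNN classifier with K≥1, and discarding prototypes whose removal does not hurt that accuracy. This is not the authors' algorithm (which learns a weighted similarity function, e.g. in the spirit of WDNN-style weight updates); the function names, the hard-vote formulation, and the greedy hill-climbing pruning loop are illustrative assumptions only.

```python
# A minimal sketch, assuming a simple hill-climbing surrogate for the paper's
# LOO-driven prototype weighting; it is NOT the published method.
import numpy as np

def loo_weighted_knn_accuracy(X, y, w, k=3):
    """Leave-one-out accuracy of a weighted KNN classifier.

    Each training point x_i is classified from the remaining points; a neighbor
    j contributes a vote scaled by its prototype weight w[j]. Prototypes with
    zero weight are treated as removed."""
    n = len(X)
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude the point itself (leave-one-out)
    d2[:, w <= 0] = np.inf                # pruned prototypes never act as neighbors
    correct = 0
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]      # k nearest remaining prototypes
        votes = {}
        for j in nbrs:
            votes[y[j]] = votes.get(y[j], 0.0) + w[j]
        if max(votes, key=votes.get) == y[i]:
            correct += 1
    return correct / n

def prune_prototypes(X, y, k=3, rounds=5, seed=0):
    """Greedy pruning: zero one prototype weight at a time and keep the change
    whenever LOO accuracy does not drop. Returns kept indices and LOO accuracy."""
    rng = np.random.default_rng(seed)
    w = np.ones(len(X))
    best = loo_weighted_knn_accuracy(X, y, w, k)
    for _ in range(rounds):
        for j in rng.permutation(len(X)):
            if w[j] == 0:
                continue
            w[j] = 0.0
            acc = loo_weighted_knn_accuracy(X, y, w, k)
            if acc >= best:
                best = acc                # prototype is redundant; keep it removed
            else:
                w[j] = 1.0                # removal hurts accuracy; restore it
    return np.flatnonzero(w), best

# Example usage on a tiny synthetic two-class problem:
# X = np.random.default_rng(1).normal(size=(60, 2))
# y = (X[:, 0] > 0).astype(int)
# kept, acc = prune_prototypes(X, y, k=3)
# print(len(kept), "prototypes kept, LOO accuracy", acc)
```

Because the LOO score is evaluated with the same K used at test time, the sketch works for any K≥1, which is the flexibility the abstract contrasts with K=1-only methods such as PW, LPD and WDNN.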