Nearest Local Hyperplane Rules for Pattern Classification

  • Authors:
  • Gábor Takács; Béla Pataki

  • Affiliations:
  • Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar Tudósok körútja 2., 1117 Budapest, Hungary (both authors)

  • Venue:
  • AI*IA 2007: Artificial Intelligence and Human-Oriented Computing (Proceedings of the 10th Congress of the Italian Association for Artificial Intelligence)
  • Year:
  • 2007

Abstract

Predicting the class of an observation from its nearest neighbors is one of the earliest approaches in pattern recognition. In addition to their simplicity, nearest neighbor rules have appealing theoretical properties; for example, the asymptotic error probability of the plain 1-nearest-neighbor (NN) rule is at most twice the Bayes bound, which implies zero asymptotic risk in the separable case, where the Bayes risk itself is zero. However, given only a finite number of training examples, NN classifiers are often outperformed in practice. One modification of the NN rule that handles separable problems better is the nearest local hyperplane (NLH) approach. In this paper we introduce a new variant of NLH classification that has two advantages over the original NLH algorithm. First, our method preserves the zero asymptotic risk property of NN classifiers in the separable case. Second, it usually provides better finite-sample performance.
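
To make the local-hyperplane idea concrete, here is a minimal sketch of a generic NLH-style classifier, assuming the common construction (in the spirit of Vincent and Bengio's HKNN rule): for each class, the query's k nearest same-class training points span a local affine hull, and the query is assigned to the class whose hull is closest. The abstract does not specify how this paper builds its local hyperplanes, so the neighbor-selection scheme and the names `local_hyperplane_distance` and `nlh_predict` below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def local_hyperplane_distance(x, neighbors):
    """Distance from query x to the affine hull of the neighbor points.

    The hull is parameterized as n0 + V @ a, where the columns of V are
    directions from the first neighbor to the others; the optimal
    coefficients a are found by least squares.
    """
    n0 = neighbors[0]
    V = (neighbors[1:] - n0).T                    # shape (d, k-1)
    if V.shape[1] == 0:                           # k = 1: hull is a point
        return np.linalg.norm(x - n0)
    a, *_ = np.linalg.lstsq(V, x - n0, rcond=None)
    proj = n0 + V @ a                             # projection onto the hull
    return np.linalg.norm(x - proj)

def nlh_predict(x, X, y, k=3):
    """Assign x to the class whose k nearest same-class points span the
    closest local hyperplane (affine hull)."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y):
        Xc = X[y == label]
        nearest = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        d = local_hyperplane_distance(x, Xc[nearest])
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Tiny demo: two well-separated classes in the plane.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.], [6., 5.], [5., 6.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(nlh_predict(np.array([0.5, 0.4]), X, y, k=2))   # -> 0
```

Note that with k = 1 each hull degenerates to a single training point and the rule reduces to plain 1-NN; larger k lets the decision exploit the local linear structure of each class, which is the property the NLH family uses to handle separable problems better.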