Similarity metric learning for a variable-kernel classifier

  • Authors:
  • David G. Lowe

  • Affiliations:
  • -

  • Venue:
  • Neural Computation
  • Year:
  • 1995


Abstract

Nearest-neighbor interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than backpropagation and most other learning methods. The number of parameters that must be determined through optimization is orders of magnitude less than for backpropagation or radial basis function (RBF) networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning are discussed that make it relevant to models for biological learning in the brain.
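The classification step described above can be sketched as follows. This is a minimal illustration, not Lowe's implementation: it assumes a diagonal (per-feature-weighted) distance metric and a Gaussian kernel whose width scales with the mean distance to the k nearest neighbors, so the kernel varies with local sample density. The conjugate gradient optimization of the metric weights and kernel scale, which is the core of VSM learning, is omitted here; all names are illustrative.

```python
import numpy as np

def vsm_predict(X_train, y_train, x, weights, k=5, kernel_scale=1.0):
    """Classify x by kernel-weighted voting over its k nearest neighbors.

    Distances use a diagonal metric: d^2(x, x_i) = sum_j weights[j] * (x[j] - x_i[j])^2.
    The Gaussian kernel width is tied to the mean neighbor distance, giving a
    "variable kernel" that shrinks in dense regions and widens in sparse ones.
    In VSM learning, `weights` and `kernel_scale` would be fit by conjugate
    gradient descent on a cross-validation error measure (not shown here).
    """
    d2 = ((X_train - x) ** 2 * weights).sum(axis=1)   # squared weighted distances
    nn = np.argsort(d2)[:k]                           # indices of k nearest neighbors
    sigma2 = kernel_scale * d2[nn].mean() + 1e-12     # variable kernel width
    w = np.exp(-d2[nn] / (2.0 * sigma2))              # Gaussian kernel weights
    classes = np.unique(y_train)
    votes = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(votes)]

# Toy usage: two well-separated 2-D clusters.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(vsm_predict(X, y, np.array([0.2, 0.2]), weights=np.ones(2), k=3))  # class 0
```

Because each prediction interpolates over only k neighbors and the trainable parameters are just the metric weights and kernel scale, the parameter count stays far below that of a backpropagation or RBF network, matching the abstract's claim.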