Non-Euclidean or non-metric measures can be informative

  • Authors:
  • Elżbieta Pękalska; Artsiom Harol; Robert P. W. Duin; Barbara Spillmann; Horst Bunke

  • Affiliations:
  • Faculty of Electrical Engineering, Mathematics and Computer Sciences, Delft University of Technology, The Netherlands (Pękalska, Harol, Duin); Institute of Computer Science and Applied Mathematics, University of Bern, Switzerland (Spillmann, Bunke)

  • Venue:
  • SSPR'06/SPR'06: Proceedings of the 2006 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition
  • Year:
  • 2006

Abstract

Statistical learning algorithms often rely on the Euclidean distance. In practice, non-Euclidean or non-metric dissimilarity measures may arise when contours, spectra, or shapes are compared by edit distances, or as a consequence of robust object matching [1,2]. It is an open issue whether such measures are advantageous for statistical learning or whether they should be constrained to obey the metric axioms. The k-nearest neighbor (NN) rule is widely applied to general dissimilarity data as the most natural approach. Alternative methods exist that embed such data into suitable representation spaces in which statistical classifiers are constructed [3]. In this paper, we investigate the relation between the non-Euclidean aspects of dissimilarity data and the classification performance of the direct NN rule and of classifiers trained in representation spaces. This is evaluated on a parameterized family of edit distances, in which the parameter values control the strength of non-Euclidean behavior. We find that the discriminative power of these measures increases with the strength of their non-Euclidean and non-metric aspects, up to a certain optimum. We conclude that statistical classifiers perform well on such data and that the optimal parameter values correspond to a measure that is non-Euclidean and somewhat non-metric.
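
The two ingredients discussed in the abstract, quantifying how non-Euclidean a dissimilarity matrix is and training classifiers on dissimilarity data, can be illustrated with a short sketch. The code below is not the authors' implementation; the use of scikit-learn, the negative-eigenfraction statistic, and the dissimilarity-space construction with all training objects as prototypes are assumptions made for illustration only.

```python
# Minimal sketch: (1) measure the non-Euclidean character of a symmetric
# dissimilarity matrix D, (2) count triangle-inequality violations, and
# (3) train classifiers either directly on D (k-NN) or in a dissimilarity
# space (logistic regression on rows of D).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression


def negative_eigenfraction(D):
    """Fraction of 'negative' spectrum mass of B = -0.5 * J * D**2 * J.

    B is positive semi-definite iff D is Euclidean, so any negative
    eigenvalues indicate non-Euclidean behavior; the larger the fraction,
    the stronger that behavior.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    eig = np.linalg.eigvalsh((B + B.T) / 2)    # symmetrize for stability
    return np.abs(eig[eig < 0]).sum() / np.abs(eig).sum()


def triangle_violations(D, tol=1e-9):
    """Count triples (i, j, k) violating D[i, j] <= D[i, k] + D[k, j]."""
    n = D.shape[0]
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D[i, j] > D[i, k] + D[k, j] + tol:
                    count += 1
    return count


# Direct 1-NN rule on precomputed dissimilarities (works even when D is
# non-metric, since only the ordering of the values matters).
knn = KNeighborsClassifier(n_neighbors=1, metric="precomputed")

# Dissimilarity-space alternative: each object is represented by its vector
# of dissimilarities to the training objects, so an ordinary vector-space
# classifier can be trained on possibly non-Euclidean, non-metric data.
clf = LogisticRegression(max_iter=1000)
```

Usage would follow the standard scikit-learn pattern: `knn.fit(D_train, y_train)` with the square train-by-train matrix and `knn.predict(D_test_vs_train)` with test-by-train dissimilarities, versus `clf.fit(D_train, y_train)` treating each row of the dissimilarity matrix as a feature vector.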