Guaranteed classification via regularized similarity learning

  • Authors:
  • Zheng-Chu Guo; Yiming Ying

  • Venue:
  • Neural Computation
  • Year:
  • 2014

Abstract

Learning an appropriate similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the link between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish generalization bounds for it. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning; hence, good generalization of the learned similarity function guarantees good classification performance of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Because their techniques depend on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds only for Frobenius matrix-norm regularization. Our techniques, based on the Rademacher complexity (Bartlett & Mendelson, 2002) and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and the mixed (2,1)-norm.
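
For concreteness, the following is a minimal sketch of the kind of regularized similarity learning formulation the abstract refers to; the bilinear similarity, the pairwise loss, and the form of the resulting classifier are assumptions made for illustration (in the spirit of the similarity-based framework of Bellet, Habrard, and Sebban), not details taken from this abstract.

% Sketch (assumed form): learn a bilinear similarity K_M(x, x') = x^T M x'
% by regularized empirical risk minimization over labeled pairs, with a
% general matrix norm \|.\| as the regularizer.
\[
  \min_{M \in \mathbb{R}^{d \times d}}
    \frac{1}{n(n-1)} \sum_{i \neq j}
      \ell\bigl( y_i y_j \, x_i^{\top} M x_j \bigr)
    \;+\; \lambda \, \| M \| ,
\]
% where \ell is a convex surrogate loss (e.g. the hinge loss), \lambda > 0 is
% the regularization parameter, and \|.\| may be the Frobenius norm, the
% sparse L^1-norm, or the mixed (2,1)-norm. The learned similarity is then
% used to build a linear classifier in the similarity space,
\[
  f(x) \;=\; \operatorname{sign}\Bigl( \sum_{k=1}^{n} \alpha_k \, y_k \, x^{\top} M x_k \Bigr),
\]
% and the letter's results bound the generalization error of f in terms of
% the generalization bound established for the similarity learning step.

Under this reading, the Frobenius-norm case is the one covered by the uniform-stability argument of Bellet, Habrard, and Sebban (2012), while the Rademacher-complexity argument is what allows the regularizer \|.\| above to be a general, possibly sparsity-inducing, matrix norm.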