A scalable algorithm for learning a Mahalanobis distance metric

  • Authors:
  • Junae Kim; Chunhua Shen; Lei Wang

  • Affiliations:
  • The Australian National University, Canberra, ACT, Australia (all authors)

  • Venue:
  • ACCV'09: Proceedings of the 9th Asian Conference on Computer Vision - Volume Part III
  • Year:
  • 2009

Abstract

A distance metric that can accurately reflect the intrinsic characteristics of data is critical for visual recognition tasks. An effective way to define such a metric is to learn it from a set of training samples. In this work, we propose a fast and scalable algorithm to learn a Mahalanobis distance. By employing the principle of margin maximization to secure better generalization performance, this algorithm formulates metric learning as a convex optimization problem with a positive semidefinite (psd) matrix variable. Based on an important theorem that a psd matrix with trace of one can always be represented as a convex combination of multiple rank-one matrices, our algorithm employs a differentiable loss function and solves the above convex optimization problem with gradient descent methods. This algorithm not only naturally maintains the psd requirement on the matrix variable that is essential for metric learning, but also significantly reduces computational overhead, making it much more efficient as the dimension of the feature vectors grows. Experimental study on benchmark data sets indicates that, compared with existing metric learning algorithms, our algorithm achieves higher classification accuracy with much less computational load.
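The theorem the abstract relies on can be verified numerically: any psd matrix with trace one equals a convex combination of rank-one matrices, with the eigendecomposition providing one such combination (eigenvalues as the convex weights). The sketch below illustrates this property only; it is not the paper's learning algorithm, and the matrix used is a randomly generated stand-in.

```python
import numpy as np

# Build an arbitrary psd matrix and normalize it to trace one.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A @ A.T            # psd by construction
X /= np.trace(X)       # trace(X) == 1

# Eigendecomposition: X = sum_i w_i * u_i u_i^T, where the
# eigenvalues w_i are nonnegative and sum to trace(X) = 1,
# i.e. a convex combination of rank-one matrices.
w, U = np.linalg.eigh(X)
recon = sum(w[i] * np.outer(U[:, i], U[:, i]) for i in range(4))

print(np.allclose(recon, X))     # the rank-one combination recovers X
print(np.isclose(w.sum(), 1.0))  # convex weights: nonnegative, summing to one
```

This is why parameterizing the optimization by rank-one components preserves the psd constraint for free: any nonnegative combination of rank-one outer products is automatically psd, so no eigenvalue projection step is needed during gradient descent.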