An efficient sparse metric learning in high-dimensional space via l1-penalized log-determinant regularization

  • Authors:
  • Guo-Jun Qi; Jinhui Tang; Zheng-Jun Zha; Tat-Seng Chua; Hong-Jiang Zhang

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, IL; National University of Singapore, Singapore; National University of Singapore, Singapore; National University of Singapore, Singapore; Microsoft Advanced Technology Center, Beijing, China

  • Venue:
  • ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning
  • Year:
  • 2009

Abstract

This paper proposes an efficient sparse metric learning algorithm for high-dimensional spaces via l1-penalized log-determinant regularization. Compared with most existing distance metric learning algorithms, the proposed algorithm exploits the sparse nature of the intrinsic high-dimensional feature space. This sparsity prior serves to regularize the complexity of the distance model, which is especially valuable in the "small sample size n and high dimension d" setting. Theoretically, by analogy to the covariance estimation problem, we show that the proposed algorithm converges to the target distance matrix, which has at most m nonzeros per row, at rate O(√((m² log d)/n)). Moreover, from the implementation perspective, the l1-penalized log-determinant formulation can be optimized efficiently by block coordinate descent, which is much faster than the standard semidefinite programming widely adopted in other advanced distance metric learning algorithms. We compare this algorithm with state-of-the-art methods on various datasets and obtain competitive results.