This paper proposes an efficient sparse metric learning algorithm for high-dimensional spaces via an l1-penalized log-determinant regularization. Compared with most existing distance metric learning algorithms, the proposed algorithm exploits the sparsity underlying the intrinsic high-dimensional feature space. This sparsity prior serves to regularize the complexity of the distance model, especially in the "few examples p, high dimension d" setting. Theoretically, by analogy with the covariance estimation problem, we show that the proposed algorithm is consistent at rate O(√((m² log d)/n)) with respect to a target distance matrix having at most m nonzeros per row. Moreover, from the implementation perspective, the l1-penalized log-determinant formulation can be efficiently optimized in a block coordinate descent fashion, which is much faster than the standard semi-definite programming widely adopted in many other advanced distance learning algorithms. We compare this algorithm with other state-of-the-art methods on various datasets and obtain competitive results.
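The abstract's analogy to covariance estimation suggests an objective of the graphical-lasso form. As a minimal sketch (not the paper's exact formulation), the function below evaluates a hypothetical l1-penalized log-determinant objective f(M) = tr(SM) − log det(M) + λ‖M‖₁,off for a candidate metric M, where S stands in for a sample covariance built from same-class pairwise differences; the names and the off-diagonal penalty choice are assumptions for illustration.

```python
import numpy as np

def sparse_metric_objective(M, S, lam):
    """Hypothetical l1-penalized log-determinant objective, by analogy
    to graphical lasso:

        f(M) = tr(S M) - log det(M) + lam * sum_{i != j} |M_ij|

    M : candidate (symmetric positive-definite) distance matrix
    S : covariance-like matrix from same-class pairwise differences
    lam : l1 regularization strength encouraging a sparse metric
    """
    sign, logdet = np.linalg.slogdet(M)
    if sign <= 0:
        return np.inf  # objective is only defined for positive-definite M
    # l1 penalty on off-diagonal entries only (an assumed design choice)
    off_diag_l1 = np.abs(M).sum() - np.abs(np.diag(M)).sum()
    return np.trace(S @ M) - logdet + lam * off_diag_l1

# For M = S = I (d = 3), tr(SM) = 3, log det(M) = 0, and the penalty
# vanishes, so the objective is 3.
value = sparse_metric_objective(np.eye(3), np.eye(3), lam=0.5)
```

A block coordinate descent solver, as referenced in the abstract, would minimize such an objective one row/column block of M at a time, which is what makes it cheaper than a generic semi-definite program.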