In this paper, we study the problem of learning a metric and propose a loss-function-based metric learning framework, in which the metric is estimated by minimizing an empirical risk over a training set. Under mild conditions on the instance distribution and the loss function used, we prove that the empirical risk converges to its expected counterpart at a root-n rate. In addition, under the assumption that the best metric minimizing the expected risk is bounded, we prove that the learned metric is consistent. Two example algorithms are presented within the proposed framework, one based on a log loss function and the other on a smoothed hinge loss function. Experimental results support the effectiveness of the proposed algorithms.
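As a rough illustration of the framework described above, the following sketch computes an empirical risk over labeled pairs under a Mahalanobis metric M = LᵀL, with both a log loss and a smoothed hinge loss. The concrete pairwise margin formulation, the threshold `b`, and all function names here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def log_loss(margin):
    # Log loss: log(1 + exp(-margin)), in a numerically stable form.
    return np.logaddexp(0.0, -margin)

def smoothed_hinge_loss(margin):
    # Quadratically smoothed (Huberized) hinge loss: differentiable
    # everywhere, unlike the plain hinge max(0, 1 - margin).
    if margin >= 1.0:
        return 0.0
    if margin <= 0.0:
        return 0.5 - margin
    return 0.5 * (1.0 - margin) ** 2

def empirical_risk(L, pairs, labels, loss=log_loss, b=1.0):
    """Empirical risk of the metric M = L^T L over labeled pairs.

    pairs  : list of (x_i, x_j) feature-vector pairs
    labels : +1 for similar pairs, -1 for dissimilar pairs
    The margin y * (b - d_M(x_i, x_j)^2) rewards similar pairs for
    being closer than the threshold b and dissimilar pairs for being
    farther away (b is a hypothetical illustration parameter).
    """
    risk = 0.0
    for (xi, xj), y in zip(pairs, labels):
        diff = L @ (xi - xj)
        squared_dist = diff @ diff          # d_M(x_i, x_j)^2
        risk += loss(y * (b - squared_dist))
    return risk / len(pairs)
```

Minimizing this empirical risk over `L` (e.g., by gradient descent) would yield the learned metric; by the abstract's convergence result, this minimum approaches the expected risk at a root-n rate as the number of training pairs grows.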