Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research.
Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research.
Learning a Mahalanobis metric from equivalence constraints. Journal of Machine Learning Research.
Protein homology detection using string alignment kernels. Bioinformatics.
SVM soft margin classifiers: linear programming versus quadratic programming. Neural Computation.
On a theory of learning with similarity functions. ICML '06: Proceedings of the 23rd International Conference on Machine Learning.
Learning distance metrics with contextual constraints for image retrieval. CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 2.
Information-theoretic metric learning. Proceedings of the 24th International Conference on Machine Learning.
Learning similarity with operator-valued large-margin classifiers. Journal of Machine Learning Research.
More generality in efficient multiple kernel learning. ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning.
Similarity-based classification: concepts and algorithms. Journal of Machine Learning Research.
Large scale online learning of image similarity through ranking. Journal of Machine Learning Research.
Regularization techniques for learning with matrices. Journal of Machine Learning Research.
Regularization networks with indefinite kernels. Journal of Approximation Theory.
Learning an appropriate similarity function from the available data is a central problem in machine learning, since the success of many learning algorithms critically depends on the choice of a similarity function for comparing examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the link between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish its generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound for similarity learning, so good generalization of the learned similarity function guarantees good classification by the resulting linear classifier. Our results extend and improve those of Bellet, Habrard, and Sebban (2012): because their techniques depend on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds only for Frobenius matrix-norm regularization. Our techniques, which use the Rademacher complexity (Bartlett & Mendelson, 2002) and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and the mixed (2,1)-norm.
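To make the setting concrete, the following is a minimal sketch of regularized similarity learning with interchangeable matrix norms. It is not the letter's exact formulation: we assume a bilinear similarity K_M(x, x') = xᵀMx', a hinge loss on labeled pairs, and subgradient descent, with the regularizer switchable between the Frobenius norm and the sparse entrywise L1-norm discussed above. The function name and all hyperparameters are illustrative.

```python
import numpy as np

def learn_similarity(pairs, labels, d, norm="fro", lam=0.1, lr=0.05, epochs=200):
    """Learn M for the bilinear similarity K_M(x, x') = x^T M x'.

    Minimizes the empirical hinge loss max(0, 1 - y * x^T M x') over
    labeled pairs plus lam * ||M||, where ||.|| is a chosen matrix norm.
    This is an illustrative sketch, not the formulation from the letter.
    """
    M = np.zeros((d, d))
    n = len(pairs)
    for _ in range(epochs):
        grad = np.zeros((d, d))
        for (x, xp), y in zip(pairs, labels):
            if y * (x @ M @ xp) < 1.0:        # hinge loss is active
                grad -= y * np.outer(x, xp) / n
        if norm == "fro":                     # Frobenius-norm subgradient
            nrm = np.linalg.norm(M)
            grad += lam * (M / nrm if nrm > 0 else 0.0)
        elif norm == "l1":                    # sparse entrywise L1-norm
            grad += lam * np.sign(M)
        M -= lr * grad
    return M
```

Swapping `norm="fro"` for `norm="l1"` changes only the regularizer's subgradient, which mirrors how the letter's bounds cover a family of matrix norms within one formulation rather than being tied to the Frobenius case.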