In this paper, we study low-rank matrix nearness problems, with a focus on learning low-rank positive semidefinite (kernel) matrices for machine learning applications. Existing algorithms for learning kernel matrices often scale poorly, with running times that are cubic in the number of data points. We propose efficient algorithms that scale linearly in the number of data points and quadratically in the rank of the input matrix. We employ Bregman matrix divergences as the measures of nearness; these divergences are natural for learning low-rank kernels because they preserve rank as well as positive semidefiniteness. Special cases of our framework yield faster algorithms for several existing learning problems, and experimental results demonstrate that our algorithms can effectively learn both low-rank and full-rank kernel matrices.
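As a point of reference, here is a brief sketch of the Bregman matrix divergence in its standard form; the abstract itself does not spell out the definition, so the notation below is an assumption based on common usage rather than a quotation from the paper. For a strictly convex, differentiable function $\phi$ on symmetric matrices, the divergence between positive semidefinite matrices $X$ and $Y$ is

\[
D_{\phi}(X, Y) \;=\; \phi(X) \;-\; \phi(Y) \;-\; \operatorname{tr}\!\big(\nabla\phi(Y)^{\top}(X - Y)\big).
\]

For instance, taking $\phi(X) = -\log\det X$ gives the LogDet divergence $D_{\ell d}(X, Y) = \operatorname{tr}(XY^{-1}) - \log\det(XY^{-1}) - n$ for $n \times n$ matrices, while $\phi(X) = \operatorname{tr}(X \log X - X)$ gives the von Neumann divergence $D_{vN}(X, Y) = \operatorname{tr}(X \log X - X \log Y - X + Y)$. Divergences of this family are the "measures of nearness" the abstract refers to when it states that rank and positive semidefiniteness are preserved during learning.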