We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated by the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
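To make the update concrete, here is a minimal sketch of one MEG step for the square loss described above: the gradient step is taken in the matrix-logarithm domain and mapped back with the matrix exponential, which keeps the parameter symmetric positive definite, and a final division by the trace restores the trace-one constraint. The function names (`sym_logm`, `sym_expm`, `meg_update`), the learning rate `eta`, and the particular gradient of the loss `(tr(W X) - y)^2` are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sym_logm(A):
    # Matrix logarithm of a symmetric positive definite matrix
    # via its eigendecomposition (stays real-valued).
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def sym_expm(A):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def meg_update(W, X, y, eta=0.05):
    """One matrix exponentiated gradient (MEG) step, sketched for the
    square loss (tr(W X) - y)^2; `eta` is an assumed learning rate."""
    y_hat = np.trace(W @ X)           # on-line prediction tr(W X)
    grad = 2.0 * (y_hat - y) * X      # gradient of the square loss in W
    M = sym_expm(sym_logm(W) - eta * grad)  # exp/log step preserves PD
    return M / np.trace(M)            # renormalize back to trace one
```

Starting from the uniform parameter `W = I/n` (the matrix analogue of the uniform probability vector), each step returns a symmetric positive definite matrix of trace one, mirroring how EG keeps its parameter on the probability simplex.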