This paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms scale linearly with the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix, and observe good performance on classical benchmarks.
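The distance-learning application described above can be illustrated with a minimal sketch. The snippet below is not the paper's Riemannian algorithm: it uses the common simplification of parameterizing the rank-r PSD matrix as A = G Gᵀ and taking plain Euclidean gradient steps on the factor G, with a hinge-style loss that pulls similar pairs below a unit threshold and pushes dissimilar pairs above it. The function name, loss, and hyperparameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def learn_lowrank_psd_metric(X, pairs, labels, rank=2, lr=0.01, epochs=100):
    """Sketch: learn A = G G^T (rank <= r, PSD by construction) so that the
    squared distance d(x, y) = (x - y)^T A (x - y) is small for similar
    pairs (label +1) and large for dissimilar pairs (label -1).
    Plain gradient steps on the factor G stand in for the paper's
    Riemannian updates on the fixed-rank PSD manifold."""
    d = X.shape[1]
    rng = np.random.default_rng(0)
    G = rng.normal(scale=0.1, size=(d, rank))
    for _ in range(epochs):
        for (i, j), y in zip(pairs, labels):
            delta = X[i] - X[j]
            v = G.T @ delta                  # r-dimensional projection
            dist = v @ v                     # (x - y)^T G G^T (x - y)
            # hinge loss around a unit margin:
            # similar pairs penalized when dist > 1, dissimilar when dist < 1
            margin = dist - 1.0 if y == 1 else 1.0 - dist
            if margin > 0:
                sign = 1.0 if y == 1 else -1.0
                grad = 2.0 * sign * np.outer(delta, v)   # d(dist)/dG
                G -= lr * grad
    return G @ G.T  # PSD matrix of rank at most r
```

Factoring A as G Gᵀ keeps the iterate positive semidefinite and of bounded rank at every step, and each update costs O(d·r) per pair, mirroring the linear complexity in the problem size emphasized in the abstract.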