Low-rank matrix approximation is an attractive model for large-scale machine learning problems because it not only reduces memory and runtime complexity but also provides a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems that require a low-rank positive semidefinite solution. Despite their nonconvexity, we exploit the structure of these problems to derive an efficient solver that converges to a local optimum. Furthermore, we show that the proposed solver dramatically improves the efficiency and scalability of several concrete problems of significant interest to the machine learning community, including the Top-k Eigenvalue problem, distance metric learning, and kernel learning. Extensive experiments on UCI benchmarks demonstrate the effectiveness and efficiency of the proposed method.
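As an illustration of the low-rank positive semidefinite setting (not the paper's actual solver), the minimal sketch below casts the Top-k Eigenvalue problem as maximizing tr(Y^T A Y) over orthonormal Y in R^{n x k}, so that X = Y Y^T is a rank-k PSD matrix, and solves it by projected gradient ascent with a QR retraction. The function name, step size, and iteration count are illustrative assumptions, not quantities taken from the paper.

import numpy as np

def top_k_eigenspace(A, k, steps=500, lr=0.05, seed=0):
    """Projected gradient ascent for max tr(Y^T A Y) s.t. Y^T Y = I_k.

    A: symmetric (n, n) array. Returns the orthonormal factor Y and the
    rank-k positive semidefinite matrix X = Y Y^T. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    # Random orthonormal starting point.
    Y, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(steps):
        Y = Y + lr * (A @ Y)    # ascent step; grad of tr(Y^T A Y) is 2 A Y (the 2 is folded into lr)
        Y, _ = np.linalg.qr(Y)  # retract back onto the orthonormality constraint
    return Y, Y @ Y.T

# Usage: recover the two largest eigenvalues of a random symmetric matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2
Y, X = top_k_eigenspace(A, k=2)
print(np.sort(np.diag(Y.T @ A @ Y)))  # Rayleigh quotients of the recovered basis
print(np.linalg.eigvalsh(A)[-2:])     # reference: true top-2 eigenvalues (ascending)

The two printed pairs should roughly agree. Because the objective is quadratic and the feasible set is nonconvex, this kind of iteration is only guaranteed to reach a local optimum in general, which mirrors the convergence claim made in the abstract.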