The Geometry of Algorithms with Orthogonality Constraints. SIAM Journal on Matrix Analysis and Applications.
The Effect of the Input Density Distribution on Kernel-based Classifiers. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML '00).
Acquiring Linear Subspaces for Face Recognition under Variable Lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Local Minima and Convergence in Low-Rank Semidefinite Programming. Mathematical Programming, Series A and B.
Consistency of Trace Norm Minimization. Journal of Machine Learning Research.
Exact Matrix Completion via Convex Optimization. Foundations of Computational Mathematics.
Decomposing Background Topics from Keywords by Principal Component Pursuit. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management (CIKM '10).
Image Tag Refinement towards Low-Rank, Content-Tag Prior and Error Sparsity. In Proceedings of the ACM International Conference on Multimedia.
A Singular Value Thresholding Algorithm for Matrix Completion. SIAM Journal on Optimization.
Robust Principal Component Analysis? Journal of the ACM.
TILT: Transform Invariant Low-Rank Textures. International Journal of Computer Vision.
Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
We address scalability issues in low-rank matrix learning. Such problems are usually solved via nuclear norm regularized optimization problems (NNROPs), which incur high computational cost with existing solvers, especially in large-scale settings. Exploiting the fact that the optimal solution matrix of an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization and build on it an active subspace algorithm that solves NNROPs efficiently by transforming large-scale NNROPs into small-scale problems. The transformation factorizes the large solution matrix into the product of a small orthonormal matrix (the active subspace) and another small matrix. Although this transformation generally leads to a nonconvex problem, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009), a typical instance of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces computational cost without loss of optimality.
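The computational saving behind the factorization can be illustrated with a small NumPy sketch (not the paper's algorithm, just the underlying identity): when the solution matrix is written as X = QR with Q having orthonormal columns, X and the much smaller matrix R share the same singular values, so the nuclear norm term can be evaluated on R alone. The dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 150, 5  # large ambient sizes, small rank

# Build a rank-k matrix X and factor it as X = Q @ R,
# where Q (m x k) has orthonormal columns (the "active subspace").
Q, _ = np.linalg.qr(rng.standard_normal((m, k)))  # reduced QR: Q is m x k
R = rng.standard_normal((k, n))                   # small coefficient matrix
X = Q @ R

# If R = U S V^T is an SVD, then X = (Q U) S V^T is also a valid SVD,
# since Q U again has orthonormal columns. Hence X and R share
# singular values, and their nuclear norms coincide.
nn_X = np.linalg.norm(X, ord='nuc')  # SVD of an m x n matrix (expensive)
nn_R = np.linalg.norm(R, ord='nuc')  # SVD of a k x n matrix (cheap)
assert np.isclose(nn_X, nn_R)
```

This identity is what lets the large-scale nuclear norm term be replaced by one on a small factor, at the price of the nonconvexity discussed in the abstract.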