Practical Methods of Optimization (2nd ed.)
Learning with matrix factorizations
Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix. SIAM Journal on Computing.
Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence. SIAM Journal on Optimization.
Exact Matrix Completion via Convex Optimization. Foundations of Computational Mathematics.
Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, Series A and B.
On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters.
A modified parallel optimization system for updating large-size time-evolving flow matrix. Information Sciences: an International Journal.
We present several first-order algorithms for solving the low-rank matrix completion problem and its tightest convex relaxation, obtained by minimizing the nuclear norm of the matrix instead of its rank. Our first algorithm is a fixed-point continuation algorithm that incorporates an approximate singular value decomposition procedure (FPCA). FPCA can solve large matrix completion problems efficiently and attains high rates of recoverability; for example, it can recover 1000-by-1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20% of the entries. We know of no other method that achieves comparable recoverability. Our second algorithm is a row-by-row method for solving a semidefinite programming reformulation of the nuclear norm matrix completion problem; it efficiently produces highly accurate solutions to fairly large problems of this kind. Finally, we introduce an alternating direction approach based on the augmented Lagrangian framework.