The linearly constrained matrix rank minimization problem arises in many fields, such as control, signal processing, and system identification. Its tightest convex relaxation is the linearly constrained nuclear norm minimization problem. Although the latter can be cast as a semidefinite program, that approach is computationally expensive when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By combining a homotopy approach with an approximate singular value decomposition procedure, we obtain a very fast, robust, and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about three minutes by sampling only 20% of the elements. We know of no other method that achieves recoverability this good. Numerical experiments on online recommendation, DNA microarray, and image inpainting problems demonstrate the effectiveness of our algorithms.
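The fixed point algorithm described above alternates a gradient step on the observed entries with a singular value shrinkage step. The following is a minimal numpy sketch of that shrinkage iteration for matrix completion, not the authors' FPCA code: it uses a full SVD rather than the approximate SVD, and fixed, hand-picked values of the regularization weight mu and step size tau rather than the homotopy/continuation scheme; all function names and parameter choices here are illustrative.

```python
import numpy as np

def svd_shrink(Y, thresh):
    """Matrix shrinkage operator: soft-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - thresh, 0.0)
    return (U * s) @ Vt

def fixed_point_completion(M_obs, mask, mu=0.1, tau=1.0, n_iter=300):
    """Fixed point (shrinkage) iteration for nuclear norm minimization:
        min_X  mu * ||X||_*  +  0.5 * ||P_Omega(X - M)||_F^2
    via  X <- S_{tau*mu}( X - tau * P_Omega(X - M) ),
    where S is singular value soft-thresholding and P_Omega keeps
    only the observed entries (tau < 2 ensures convergence here)."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        grad = mask * (X - M_obs)          # gradient of the data-fit term
        X = svd_shrink(X - tau * grad, tau * mu)
    return X

# Example: recover a random rank-5 matrix from 40% of its entries.
rng = np.random.default_rng(0)
n, r = 200, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.4
X = fixed_point_completion(mask * M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```

In this sketch the full SVD at every iteration costs O(n^3), which is exactly the bottleneck the paper's approximate SVD removes; likewise, decreasing mu along a continuation path rather than fixing it, as FPCA does, is what yields the speed and recoverability reported in the abstract.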