This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior-point methods are not directly amenable to large problems of this kind, with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. Two remarkable features make this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Together, these facts allow the algorithm to use minimal storage and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach scales to very large problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which these algorithms can be understood in terms of well-known Lagrange multiplier algorithms.
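The iteration sketched above admits a compact description for the matrix completion setting. Writing $\mathcal{P}_\Omega$ for the orthogonal projector onto the set $\Omega$ of observed entries, the convex program is

$$\min_{\boldsymbol{X}} \; \|\boldsymbol{X}\|_* \quad \text{subject to} \quad \mathcal{P}_\Omega(\boldsymbol{X}) = \mathcal{P}_\Omega(\boldsymbol{M}),$$

and the iterates alternate a singular value shrinkage step with a step on the dual variable:

$$\boldsymbol{X}^k = \mathcal{D}_\tau(\boldsymbol{Y}^{k-1}), \qquad \boldsymbol{Y}^k = \boldsymbol{Y}^{k-1} + \delta\,\mathcal{P}_\Omega(\boldsymbol{M} - \boldsymbol{X}^k),$$

where $\mathcal{D}_\tau$ soft-thresholds the singular values at level $\tau$ and $\delta$ is a step size. The NumPy sketch below illustrates this iteration. The function name `svt_complete`, the default choices of $\tau$ and $\delta$, the zero initialization of $\boldsymbol{Y}$, and the stopping rule are illustrative assumptions rather than the paper's exact implementation, and the dense SVD deliberately ignores the sparse-matrix and truncated-SVD optimizations that make the method scale to very large problems.

```python
import numpy as np

def svt_complete(M, mask, tau=None, delta=None, max_iter=500, tol=1e-4):
    """Illustrative sketch of singular value thresholding for matrix completion.

    M    : array with the observed entries filled in (others arbitrary).
    mask : boolean array, True where an entry of M is observed.
    """
    # Heuristic defaults (assumptions, not prescriptions): a large tau
    # promotes low rank; delta scales with the inverse sampling fraction
    # so the dual step is not too timid.
    if tau is None:
        tau = 5.0 * max(M.shape)
    if delta is None:
        delta = 1.2 * M.size / mask.sum()

    observed = np.where(mask, M, 0.0)
    Y = np.zeros(M.shape)
    X = np.zeros(M.shape)
    for _ in range(max_iter):
        # Shrinkage step X^k = D_tau(Y^{k-1}): soft-threshold the
        # singular values of Y and rebuild the matrix.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Dual step Y^k = Y^{k-1} + delta * P_Omega(M - X^k), supported
        # only on the observed entries.
        residual = np.where(mask, M - X, 0.0)
        Y += delta * residual
        # Stop once the observed entries are matched to relative tolerance.
        if np.linalg.norm(residual) <= tol * np.linalg.norm(observed):
            break
    return X
```

A quick synthetic check, again purely illustrative: sample 30% of the entries of a random rank-5 matrix and hand them to the routine.

```python
rng = np.random.default_rng(0)
n, r = 200, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.3
X = svt_complete(M, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative recovery error
```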