We present and analyze an efficient implementation of an iteratively reweighted least squares (IRLS) algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed to simultaneously promote a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements satisfy a suitable generalization of the null space property known from compressed sensing, the algorithm is guaranteed to iteratively recover any matrix with an error on the order of the best rank-$k$ approximation error. In certain relevant cases, for instance the matrix completion problem, our version of the algorithm can take advantage of the Woodbury matrix identity, which allows us to expedite the solution of the least squares problems required at each iteration. We present numerical experiments that confirm the robustness of the algorithm on matrix completion problems and demonstrate its competitiveness with other recently proposed techniques.
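To illustrate the kind of speed-up the abstract alludes to, the sketch below shows the Woodbury matrix identity in isolation: solving a "diagonal plus low-rank" linear system by factoring only a small $k \times k$ system instead of the full $n \times n$ one. This is a generic NumPy illustration under our own assumptions (diagonal $D$, rank-$k$ term $UCV$), not the authors' implementation of the IRLS iteration.

```python
import numpy as np

def woodbury_solve(d, U, C, V, b):
    """Solve (D + U C V) x = b via the Woodbury identity, where D = diag(d).

    Woodbury: (D + U C V)^{-1}
            = D^{-1} - D^{-1} U (C^{-1} + V D^{-1} U)^{-1} V D^{-1},
    so only the k-by-k "capacitance" matrix is factored, not the n-by-n system.
    """
    Dinv_b = b / d                                 # D^{-1} b: diagonal solve, O(n)
    Dinv_U = U / d[:, None]                        # D^{-1} U
    capacitance = np.linalg.inv(C) + V @ Dinv_U    # small k-by-k matrix
    correction = Dinv_U @ np.linalg.solve(capacitance, V @ Dinv_b)
    return Dinv_b - correction

# Usage: compare against a direct dense solve on a random instance.
rng = np.random.default_rng(0)
n, k = 200, 5
d = rng.uniform(1.0, 2.0, n)          # well-conditioned diagonal part
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.eye(k)
b = rng.standard_normal(n)

x = woodbury_solve(d, U, C, V, b)
x_ref = np.linalg.solve(np.diag(d) + U @ C @ V, b)
```

In an IRLS iteration for matrix completion, the weight matrix has exactly this "identity plus low-rank" structure when the current iterate is (approximately) low rank, which is why the identity pays off there: each least squares solve costs roughly $O(nk^2)$ instead of $O(n^3)$.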