Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization
SIAM Journal on Optimization
The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas, including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum-rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms, IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve on the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set.
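To make the idea concrete, the following is a minimal NumPy sketch of the IRLS-p pattern for matrix completion, not the paper's implementation: each iteration forms the weight matrix W = (XXᵀ + γI)^(p/2−1) from the current iterate, takes a conservative gradient step on the smoothed surrogate trace((XXᵀ + γI)^(p/2)), and projects back onto the observed entries. The step size, the γ-decay schedule, and the parameter defaults are illustrative assumptions, not the tuned choices from the paper.

```python
import numpy as np

def irls_p_completion(M_obs, mask, p=1.0, gamma0=1.0, eta=1.1,
                      gamma_min=1e-8, n_iter=300):
    """Illustrative IRLS-p sketch for matrix completion.

    M_obs : observed matrix with zeros at unobserved positions.
    mask  : boolean array, True where an entry is observed.
    All parameter defaults are hypothetical choices for this sketch.
    """
    m, _ = M_obs.shape
    X = M_obs.copy()            # start from the zero-filled observations
    gamma = gamma0
    for _ in range(n_iter):
        # Weight matrix W = (X X^T + gamma I)^(p/2 - 1), via eigendecomposition.
        G = X @ X.T + gamma * np.eye(m)
        vals, vecs = np.linalg.eigh(G)
        W = (vecs * vals ** (p / 2 - 1)) @ vecs.T
        # Gradient step on the smooth surrogate; the step size is chosen so
        # that step * ||W|| <= 1/2, since ||W|| <= gamma^(p/2 - 1).
        step = gamma ** (1 - p / 2) / 2
        X = X - step * (W @ X)
        # Project onto the affine constraint set: restore observed entries.
        X[mask] = M_obs[mask]
        # Gradually sharpen the smoothing, as in continuation schemes.
        gamma = max(gamma / eta, gamma_min)
    return X
```

The shrinkage step damps directions associated with small eigenvalues of XXᵀ far more than signal directions, which is how the reweighting steers the iterate toward low rank while the projection keeps it consistent with the data.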