We present an algorithm for finding an s-sparse vector x that minimizes the square-error ∥y − Φx∥², where Φ satisfies the restricted isometry property (RIP) with isometric constant δ2s < 1/3. Our algorithm, called GraDeS (Gradient Descent with Sparsification), iteratively updates x as

x ← H_s( x + (1/γ) Φᵀ(y − Φx) ),

where γ > 1 and H_s sets all but the s largest-magnitude coordinates to zero. GraDeS converges to the correct solution in a constant number of iterations. The condition δ2s < 1/3 is the most general for which a near-linear-time algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known is δ2s < √2 − 1. Our Matlab implementation of GraDeS outperforms previously proposed algorithms such as Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.
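To make the update rule concrete, here is a minimal NumPy sketch of the iteration above (a form of iterative hard thresholding). The function name `grades`, the stopping rule, the iteration cap, and the default γ = 4/3 are illustrative assumptions, not the authors' reference implementation; the abstract itself only requires γ > 1.

```python
import numpy as np

def grades(y, Phi, s, gamma=4.0 / 3.0, max_iter=500, tol=1e-10):
    """Sketch of the GraDeS update x <- H_s(x + (1/gamma) * Phi^T (y - Phi x)).

    Defaults and the stopping rule are assumptions for illustration;
    the paper's abstract only requires gamma > 1.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(max_iter):
        # Gradient step on the squared error, scaled by 1/gamma.
        z = x + (1.0 / gamma) * (Phi.T @ (y - Phi @ x))
        # H_s: keep the s largest-magnitude coordinates, zero out the rest.
        x_new = np.zeros_like(z)
        top = np.argpartition(np.abs(z), -s)[-s:]
        x_new[top] = z[top]
        if np.linalg.norm(x_new - x) <= tol:  # iterates have stabilized
            return x_new
        x = x_new
    return x

# Toy usage: recover a 5-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # columns near unit norm
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = grades(Phi @ x_true, Phi, s)
print(np.linalg.norm(x_hat - x_true))  # small residual on well-posed instances
```

Each iteration costs two matrix-vector products plus a partial sort, which is what makes the per-iteration work near-linear in the size of Φ.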