Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property

  • Authors:
  • Rahul Garg; Rohit Khandekar

  • Affiliations:
  • IBM T. J. Watson Research Center, Yorktown Heights, NY; IBM T. J. Watson Research Center, Yorktown Heights, NY

  • Venue:
  • ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
  • Year:
  • 2009

Abstract

We present an algorithm for finding an s-sparse vector x that minimizes the square-error ∥y − Φx∥² where Φ satisfies the restricted isometry property (RIP), with isometric constant δ2s < 1/3. Our algorithm, called GraDeS (Gradient Descent with Sparsification), iteratively updates x as

x ← Hs(x + (1/γ) · Φ^T (y − Φx))

where γ > 1 is a constant and Hs sets all but the s largest-magnitude coordinates to zero. GraDeS converges to the correct solution in a constant number of iterations. The condition δ2s < 1/3 is the most general for which a near-linear-time algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known is δ2s < √2 − 1. Our Matlab implementation of GraDeS outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.
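
Since the abstract only states the update rule, the following is a minimal NumPy sketch of the iteration it describes, not the authors' Matlab implementation. Everything beyond the update x ← Hs(x + (1/γ)Φ^T(y − Φx)) is our own illustrative choice: the names grades and hard_threshold, the value γ = 4/3 (the abstract only requires γ > 1), the zero initialization, and the stopping test.

```python
import numpy as np

def hard_threshold(x, s):
    # Hs: keep the s largest-magnitude entries of x, zero out the rest.
    out = np.zeros_like(x)
    if s > 0:
        keep = np.argpartition(np.abs(x), -s)[-s:]
        out[keep] = x[keep]
    return out

def grades(Phi, y, s, gamma=4.0 / 3.0, max_iter=1000, tol=1e-10):
    # GraDeS iteration: x <- Hs(x + (1/gamma) * Phi^T (y - Phi x)).
    # gamma = 4/3 is an illustrative choice; the abstract only asks for gamma > 1.
    x = np.zeros(Phi.shape[1])
    for _ in range(max_iter):
        grad_step = x + (1.0 / gamma) * (Phi.T @ (y - Phi @ x))
        x_new = hard_threshold(grad_step, s)
        if np.linalg.norm(x_new - x) <= tol:  # iterates have stopped moving
            return x_new
        x = x_new
    return x

# Toy usage: a column-scaled Gaussian matrix satisfies RIP with high
# probability when the number of rows is large enough relative to s.
rng = np.random.default_rng(0)
m, n, s = 200, 400, 10
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true
x_hat = grades(Phi, y, s)
print(np.linalg.norm(x_hat - x_true))  # expected to be near zero here
```

The hard-thresholding step is what separates this from plain gradient descent: each iteration takes one gradient step on ∥y − Φx∥² and then re-sparsifies, so the per-iteration cost is dominated by two matrix-vector products, consistent with the near-linear running time the abstract claims.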