Deterministic compressed sensing

  • Authors:
  • A. Robert Calderbank; Robert Schapire; Sina Jafarpour

  • Affiliations:
  • Princeton University; Princeton University; Princeton University

  • Venue:
  • Ph.D. thesis, Princeton University
  • Year:
  • 2011

Abstract

The central goal of compressed sensing is to capture the attributes of a signal using very few measurements. The initial publications by Donoho and by Candès and Tao have been followed by applications to image compression, data streaming, medical signal processing, digital communication, and many others. The emphasis has been on random sensing, but this framework has limitations in terms of performance guarantees, storage requirements, and computational cost. This thesis describes two deterministic alternatives.

The first alternative is based on expander graphs. We first show that expander graphs are well suited to compressed sensing: they provide explicit and efficient sensing matrices as well as simple and efficient recovery algorithms. We show that by reformulating signal reconstruction as a zero-sum game we can efficiently recover any sparse vector. We provide a saddle-point reformulation of the expander-based sparse approximation problem and propose an efficient expander-based sparse approximation algorithm, called the GAME algorithm. We show that the restricted isometry property of expander matrices in the ℓ1 norm ensures that the GAME algorithm always recovers a sparse approximation to the optimal solution with an ℓ1/ℓ1 data-domain approximation guarantee.

We also demonstrate resilience to Poisson noise. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature no longer apply. We develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.

The second alternative is based on error-correcting codes. We show that deterministic sensing matrices based on second-order Reed–Muller codes optimize average-case performance. We also describe a very simple algorithm, one-step thresholding, that succeeds in average-case model selection and sparse approximation where more sophisticated algorithms, developed in the context of random sensing, fail completely.

Finally, we provide an algorithmic framework for structured sparse recovery, in which extra prior knowledge about the sparse vector is available. Our algorithm, called Nesterov Iterative Hard-Thresholding (NIHT), uses the gradient information in the convex data-error objective to navigate over the non-convex set of structured sparse signals. Experiments show that NIHT can empirically outperform ℓ1-minimization and other state-of-the-art convex-optimization-based algorithms in sparse recovery.
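As a concrete illustration of the expander-based Poisson sensing setup described in the abstract, the sketch below builds a sparse binary measurement matrix (a toy random stand-in for the explicit expander construction developed in the thesis) and generates Poisson-distributed counts whose means are the noiseless measurements. All names, dimensions, and parameter values are illustrative assumptions, not the thesis's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_binary_matrix(m, n, d, rng):
    """Toy stand-in for an expander-based sensing matrix: each column has
    exactly d ones placed in d distinct rows. A true expander construction
    would additionally guarantee the required vertex-expansion property;
    here the row positions are simply chosen at random."""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, j] = 1.0
    return A

# Nonnegative k-sparse signal, e.g. per-flow packet arrival rates at a router.
n, m, k, d = 256, 64, 5, 8
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(5.0, 20.0, size=k)

A = sparse_binary_matrix(m, n, d, rng)

# Poisson observation model: each measurement is a Poisson count whose
# mean is the corresponding entry of the noiseless measurement A @ x.
y = rng.poisson(A @ x)
print(y[:10])
```

A MAP recovery algorithm such as the one proposed in the thesis would then estimate x from the integer counts y and the known matrix A; that step is not reproduced here.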
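The one-step thresholding algorithm mentioned for Reed–Muller sensing matrices reduces, in its generic form, to correlating the measurements with the columns of the sensing matrix and keeping the largest correlations as the estimated support. The minimal sketch below shows that generic procedure with an arbitrary matrix rather than an actual second-order Reed–Muller construction; the least-squares refit on the estimated support is an added convenience and is assumed, not taken from the thesis.

```python
import numpy as np

def one_step_thresholding(A, y, k):
    """Generic one-step thresholding: correlate y with every column of A,
    keep the k largest correlations as the support estimate, and refit the
    coefficients on that support by least squares."""
    scores = np.abs(A.T @ y)              # column/measurement correlations
    support = np.argsort(scores)[-k:]     # indices of the k largest scores
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Tiny demo with a Gaussian matrix standing in for a deterministic one.
rng = np.random.default_rng(1)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = one_step_thresholding(A, A @ x, k)
print(np.linalg.norm(x - x_hat))
```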
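Finally, a hedged sketch of the idea behind NIHT: a Nesterov (momentum) gradient step on the least-squares data error followed by a hard-thresholding projection onto a sparse set. The thesis's NIHT projects onto a general structured-sparsity model; plain k-sparsity, the fixed step size, and the iteration count below are simplifying assumptions made for illustration.

```python
import numpy as np

def hard_threshold(v, k):
    """Projection onto k-sparse vectors: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def nesterov_iht(A, y, k, step=None, n_iter=200):
    """Simplified Nesterov-accelerated iterative hard thresholding:
    momentum gradient step on 0.5 * ||A z - y||_2^2, then projection
    onto the set of k-sparse vectors."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / ||A||_2^2 (assumed step size)
    x = np.zeros(n)
    z = x.copy()        # momentum ("look-ahead") point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                 # gradient of the data-error objective
        x_new = hard_threshold(z - step * grad, k)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

The momentum term in the extrapolation step is what distinguishes this scheme from plain iterative hard thresholding; replacing the k-sparse projection with a projection onto a structured-sparsity model gives the structured recovery setting the abstract refers to.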