On the reconstruction of block-sparse signals with an optimal number of measurements

  • Authors:
  • Mihailo Stojnic; Farzad Parvaresh; Babak Hassibi

  • Affiliations:
  • School of Industrial Engineering, Purdue University, West Lafayette, IN; Center for Mathematics of Information, California Institute of Technology, Pasadena, CA; Department of Electrical Engineering, California Institute of Technology, Pasadena, CA

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2009

Abstract

Let A be an M by N matrix (M < N) which is an instance of a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations Ax = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y, then with overwhelming probability over A the sparsest solution is unique and can be found by an exhaustive search over x, with an exponential time complexity for any y. The recent work of Candès, Donoho, and Tao shows that minimization of the l1 norm of x subject to Ax = y yields the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements; specifically, if the dimension of y approaches the dimension of x, the sparsity of x should be smaller than a constant fraction of N. Here we consider block-sparse signals: x consists of n = N/d blocks, where each block is of length d and is either a zero vector or a nonzero vector (by a nonzero vector we mean a vector that may have both zero and nonzero components). Instead of the l1-norm relaxation, we consider the following relaxation:

min_x ||X_1||_2 + ||X_2||_2 + ... + ||X_n||_2, subject to Ax = y    (*)

where X_i = (x_{(i-1)d+1}, x_{(i-1)d+2}, ..., x_{id})^T for i = 1, 2, ..., n. Our main result is that as n → ∞, (*) finds the sparsest solution to Ax = y, with overwhelming probability in A, for any x whose block sparsity satisfies k/n < 1/2 - O(ε), provided m/n > 1 - 1/d and d = Ω(log(1/ε)/ε^3). The relaxation given in (*) can be solved in polynomial time using semidefinite programming.
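To make the relaxation concrete, here is a minimal NumPy sketch (not the authors' code) that solves min_x sum_i ||X_i||_2 subject to Ax = y via ADMM, alternating exact projection onto {x : Ax = y} with block soft-thresholding; the problem sizes, the seed, and the ADMM parameters are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the relaxation (*): minimize the sum of l2 norms of
# the length-d blocks of x subject to Ax = y, via ADMM. Hypothetical sizes.
import numpy as np

def block_norm_sum(x, d):
    """Mixed l2/l1 objective: sum of l2 norms of the length-d blocks of x."""
    return sum(np.linalg.norm(b) for b in x.reshape(-1, d))

def solve_block_bp(A, y, d, rho=1.0, iters=3000):
    """ADMM for min_x sum_i ||X_i||_2 subject to Ax = y."""
    M, N = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)   # cached for the affine projection
    def project(v):
        # Euclidean projection onto the affine set {x : Ax = y}
        return v - A.T @ (AAt_inv @ (A @ v - y))
    x = z = u = np.zeros(N)
    for _ in range(iters):
        x = project(z - u)                       # enforce Ax = y exactly
        v = (x + u).reshape(-1, d)               # the blocks X_i
        norms = np.linalg.norm(v, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - (1.0 / rho) / np.maximum(norms, 1e-12))
        z = (scale * v).ravel()                  # block soft-thresholding
        u = u + x - z                            # dual (scaled) update
    return x

rng = np.random.default_rng(0)
d, n, M = 4, 10, 25                  # block length, number of blocks, measurements
N = n * d
A = rng.standard_normal((M, N))      # Gaussian ensemble, M < N
x0 = np.zeros(N)
for i in (1, 7):                     # two nonzero blocks out of n = 10
    x0[i * d:(i + 1) * d] = rng.standard_normal(d)
y = A @ x0
x_hat = solve_block_bp(A, y, d)
```

The x-iterate satisfies Ax = y exactly by construction, and since x0 is feasible, the objective at x_hat should not exceed that of x0 once ADMM has converged; in the regime the paper analyzes, the minimizer of (*) coincides with the block-sparse x0.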