On the Performance of Sparse Recovery Via $\ell_p$-Minimization $(0 \leq p \leq 1)$

  • Authors:
  • Meng Wang; Weiyu Xu; Ao Tang

  • Affiliations:
  • School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA (all authors)

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2011


Abstract

It is known that a high-dimensional sparse vector ${\bf x}^*$ in ${\cal R}^n$ can be recovered from low-dimensional measurements ${\bf y}=A{\bf x}^*$, where $A \in {\cal R}^{m \times n}$ $(m<n)$ is the measurement matrix. In this paper, with $A$ being a random Gaussian matrix, we investigate the recovery ability of $\ell_p$-minimization $(0\leq p \leq 1)$ as $p$ varies, where $\ell_p$-minimization returns a vector with the least $\ell_p$ quasi-norm among all the vectors ${\bf x}$ satisfying $A{\bf x}={\bf y}$. Besides analyzing the performance of strong recovery, where $\ell_p$-minimization is required to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of “weak” recovery of $\ell_p$-minimization $(0\leq p<1)$, where the aim is to recover all the sparse vectors on one support with a fixed sign pattern. When $\alpha\,(:={m \over n}) \rightarrow 1$, we provide sharp thresholds of the sparsity ratio (i.e., the fraction of nonzero entries of a vector) that differentiate success from failure via $\ell_p$-minimization. For strong recovery, the threshold strictly decreases from $0.5$ to $0.239$ as $p$ increases from $0$ to $1$. Surprisingly, for weak recovery, the threshold is $2/3$ for all $p$ in $[0,1)$, while the threshold is $1$ for $\ell_1$-minimization. We also explicitly demonstrate that $\ell_p$-minimization $(p<1)$ can return a denser solution than $\ell_1$-minimization. For any $\alpha \in (0,1)$, we provide bounds on the sparsity ratio for strong recovery and weak recovery, respectively, below which $\ell_p$-minimization succeeds. Our bound for strong recovery improves on the existing bounds when $\alpha$ is large. In particular, regarding the recovery threshold, this paper argues that $\ell_p$-minimization has a higher threshold with smaller $p$ for strong recovery; that the threshold is the same for all $p$ for sectional recovery; and that $\ell_1$-minimization can outperform $\ell_p$-minimization for weak recovery. These results are in contrast to the traditional wisdom that $\ell_p$-minimization, though computationally more expensive, always has better sparse recovery ability than $\ell_1$-minimization since it is closer to $\ell_0$-minimization. Finally, we provide an intuitive explanation for our findings. Numerical examples are also used to unambiguously confirm and illustrate the theoretical predictions.
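
Illustrative Code Sketch

To make the comparison in the abstract concrete, the following is a minimal numerical sketch (not the authors' code) that recovers a sparse vector from Gaussian measurements via $\ell_1$-minimization, solved exactly as a linear program, and via an approximate $\ell_p$-minimization with $p=0.5$, solved by an iteratively reweighted least-squares (IRLS) heuristic. The IRLS scheme is a standard surrogate for the nonconvex $\ell_p$ problem and is not guaranteed to reach the global optimum; all dimensions, the sparsity level, and the IRLS parameters below are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 80, 40              # ambient dim, measurements, sparsity (alpha = m/n = 0.8)
A = rng.standard_normal((m, n))    # random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

def l1_min(A, y):
    """min ||x||_1 s.t. Ax = y, via the LP split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

def lp_min_irls(A, y, p=0.5, iters=50, eps=1.0):
    """Approximate min ||x||_p^p s.t. Ax = y by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the min-l2 solution
    for _ in range(iters):
        w = (x**2 + eps) ** (1 - p / 2)        # diagonal of W^{-1}, the inverse weights
        AW = A * w                             # A @ diag(w), by column-wise broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))  # x = W^{-1} A^T (A W^{-1} A^T)^{-1} y
        eps = max(eps * 0.5, 1e-9)             # anneal the smoothing parameter
    return x

x1 = l1_min(A, y)
xp = lp_min_irls(A, y, p=0.5)
print("l1   recovery error:", np.linalg.norm(x1 - x_true))
print("l0.5 recovery error:", np.linalg.norm(xp - x_true))

Comparing the two reported errors against the planted vector gives a quick empirical sense of the recovery behavior the paper analyzes; sweeping the sparsity $k$ toward the thresholds described above is a natural follow-up experiment.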