Dense error correction via l1-minimization

  • Authors:
  • John Wright; Yi Ma

  • Affiliations:
  • Visual Computing Group, Microsoft Research Asia, Beijing, China; Visual Computing Group, Microsoft Research Asia, Beijing, China, and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2010

Abstract

This paper studies the problem of recovering a sparse signal x ∈ R^n from highly corrupted linear measurements y = Ax + e ∈ R^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any sufficiently sparse signal x can be recovered by solving the l1-minimization problem min ||x||_1 + ||e||_1 subject to y = Ax + e. More precisely, if the fraction of corrupted entries (the support of e) is bounded away from one and the support of x is a sufficiently small fraction of the dimension m, then as m becomes large the above l1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned by the standard crosspolytope and a set of independent identically distributed (i.i.d.) Gaussian vectors with nonzero mean and small variance, dubbed the "cross-and-bouquet" (CAB) model. Simulations and experiments corroborate the findings and suggest extensions to the result.
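
The following is a minimal sketch of the l1-minimization described in the abstract, not an implementation from the paper. It assumes the cvxpy and numpy packages; the dimensions, sparsity level, corruption fraction, and the mean/variance used to build the "cross-and-bouquet"-style dictionary are illustrative choices.

```python
# Sketch: solve  min ||x||_1 + ||e||_1  subject to  y = A x + e
# with a highly correlated ("bouquet") dictionary A.
# All numerical settings below are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

m, n = 200, 400      # observation dimension m, signal dimension n
k_signal = 5         # sparsity of x
k_error = 120        # number of grossly corrupted observations

# Bouquet: i.i.d. Gaussian columns with nonzero mean and small variance,
# normalized to unit length, so the dictionary is highly correlated.
mu = np.ones((m, 1)) / np.sqrt(m)
A = mu + 0.05 * rng.standard_normal((m, n)) / np.sqrt(m)
A /= np.linalg.norm(A, axis=0, keepdims=True)

# Ground-truth sparse signal and sparse error with unbounded entries.
x_true = np.zeros(n)
x_true[rng.choice(n, k_signal, replace=False)] = rng.standard_normal(k_signal)
e_true = np.zeros(m)
e_true[rng.choice(m, k_error, replace=False)] = 10 * rng.standard_normal(k_error)

y = A @ x_true + e_true

# Joint l1-minimization over the signal and the error.
x = cp.Variable(n)
e = cp.Variable(m)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(e)),
                  [A @ x + e == y])
prob.solve()

print("recovery error in x:", np.linalg.norm(x.value - x_true))
```

In this sketch, increasing k_error toward m illustrates the regime the theorem addresses: recovery of x can remain exact even when a large fraction of the observations are corrupted, provided x is sparse enough relative to m.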