This paper studies the problem of recovering a sparse signal x ∈ ℝ^n from highly corrupted linear measurements y = Ax + e ∈ ℝ^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, the paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any sufficiently sparse signal x can be recovered by solving the ℓ1-minimization problem min ‖x‖1 + ‖e‖1 subject to y = Ax + e. More precisely, if the fraction of corrupted observations (the support of e) is bounded away from one and the support of x is a very small fraction of the dimension m, then as m grows the above ℓ1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of the convex polytope spanned by the standard crosspolytope together with a set of independent, identically distributed (i.i.d.) Gaussian vectors with nonzero mean and small variance, dubbed the "cross-and-bouquet" (CAB) model. Simulations and experiments corroborate the findings and suggest extensions of the result.
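The ℓ1 program min ‖x‖1 + ‖e‖1 subject to y = Ax + e can be posed as an ordinary linear program by splitting each signed variable into nonnegative parts. Below is a minimal sketch (not the authors' code) using SciPy's `linprog`; the small Gaussian dictionary and the sparsity levels are illustrative choices, not the cross-and-bouquet ensemble analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dense_error_correction(A, y):
    """Solve min ||x||_1 + ||e||_1  s.t.  y = A x + e  as a linear program.

    Write x = xp - xn and e = ep - en with all parts >= 0; the objective
    becomes the sum of all parts, and the equality constraint becomes
    [A, -A, I, -I] z = y for z = (xp, xn, ep, en).
    """
    m, n = A.shape
    A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])
    c = np.ones(2 * n + 2 * m)              # total l1 mass of (x, e)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    x_hat = z[:n] - z[n:2 * n]
    e_hat = z[2 * n:2 * n + m] - z[2 * n + m:]
    return x_hat, e_hat

# Demo: a 2-sparse signal observed through grossly corrupted measurements.
rng = np.random.default_rng(0)
m, n = 40, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)   # illustrative dictionary
x_true = np.zeros(n)
x_true[[1, 7]] = [3.0, -2.0]
e_true = np.zeros(m)
e_true[[0, 5, 33]] = [10.0, -7.0, 4.0]         # unbounded gross errors
y = A @ x_true + e_true

x_hat, e_hat = dense_error_correction(A, y)
# With these dimensions recovery is typically exact (up to solver tolerance).
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The recovered pair is always feasible (y = A x_hat + e_hat), and its objective value can be no larger than that of the true pair, since (x_true, e_true) is itself feasible for the LP.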