For any 0 < r < p < 2 and any constant ε > 0, we give an efficient deterministic construction of a linear subspace V ⊆ R^n, of dimension (1-ε)n, in which the ℓp and ℓr norms are the same up to a multiplicative factor of poly(ε^{-1}) (after proper normalization). As a corollary we get a deterministic compressed sensing algorithm (Basis Pursuit) for a new range of parameters. In particular, for any constant ε > 0 and p < 2, we obtain a sensing matrix A mapping R^n to R^{εn} with the ℓp/ℓ1 guarantee for (n · poly(ε))-sparse vectors. Namely, let x be a vector in R^n whose ℓ1 distance from a k-sparse vector (for some k = n · poly(ε)) is δ. The algorithm, given Ax as input, outputs an n-dimensional vector y such that ||x - y||_p ≤ δ · k^{1/p-1}. In particular, this gives a weak form of the ℓ2/ℓ1 guarantee. Our construction has the additional benefit that, viewed as a matrix, A has O(1) non-zero entries in each row. As a result, both the encoding (computing Ax) and the decoding (retrieving x from Ax) can be performed efficiently.