An approximate sparse recovery system in the $\ell_1$ norm makes a small number of linear measurements of a noisy vector with at most $k$ large entries and recovers those heavy hitters approximately. Formally, it consists of parameters $N$, $k$, $\epsilon$, an $m \times N$ measurement matrix $\Phi$, and a decoding algorithm $D$. Given a vector $x$, where $x_k$ denotes the optimal $k$-term approximation to $x$, the system approximates $x$ by $\hat{x} = D(\Phi x)$, which must satisfy $\|\hat{x} - x\|_1 \le (1+\epsilon)\|x - x_k\|_1$. Among the goals in designing such systems are minimizing the number $m$ of measurements and the running time of the decoding algorithm $D$. We consider the "forall" model, in which a single matrix $\Phi$, possibly "constructed" non-explicitly using the probabilistic method, is used for all signals $x$. Many previous papers have provided algorithms for this problem, but all such algorithms that use the optimal number $m = O(k \log(N/k))$ of measurements require superlinear time $\Omega(N \log(N/k))$. In this paper, we give the first algorithm for this problem that uses the optimal number of measurements (up to constant factors) and runs in sublinear time $o(N)$ when $k$ is sufficiently smaller than $N$. Specifically, for any positive integer $\ell$, our approach uses time $O(\ell^5 \epsilon^{-3} k (N/k)^{1/\ell})$ and $m = O(\ell^8 \epsilon^{-3} k \log(N/k))$ measurements, with access to a data structure requiring space and preprocessing time $O(\ell N k^{0.2}/\epsilon)$.
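To make the measurement/decoding interface concrete, here is a minimal toy sketch in the Count-Min style. It is only an illustration of the general shape of a sparse recovery system (a randomized matrix $\Phi$ realized by hash-bucket measurements, plus a decoder that returns the apparent heavy hitters); it is a "foreach"-style scheme for non-negative signals, not the for-all $\ell_1/\ell_1$ construction of the paper, and all names and parameter choices (`d`, `w`, the 2-universal hash family) are illustrative assumptions.

```python
import random

def build_hashes(d, seed=7):
    # d pairwise-independent hash functions h_j(i) = ((a*i + b) % p) % w,
    # drawn from a standard 2-universal family (illustrative choice).
    rng = random.Random(seed)
    p = (1 << 31) - 1  # Mersenne prime
    return [(rng.randrange(1, p), rng.randrange(p)) for _ in range(d)], p

def measure(x, d, w, hashes, p):
    # "Phi x": d*w linear measurements of the signal x (Count-Min buckets).
    C = [[0.0] * w for _ in range(d)]
    for i, v in enumerate(x):
        for j, (a, b) in enumerate(hashes):
            C[j][((a * i + b) % p) % w] += v
    return C

def decode_topk(C, N, k, w, hashes, p):
    # Point query: min over rows over-estimates x_i by at most the colliding
    # mass (valid only for non-negative signals); return the k largest.
    est = []
    for i in range(N):
        e = min(C[j][((a * i + b) % p) % w] for j, (a, b) in enumerate(hashes))
        est.append((e, i))
    est.sort(reverse=True)
    return {i for _, i in est[:k]}

# A signal with k = 3 heavy entries plus small noise.
N, k, d, w = 1000, 3, 5, 64
hashes, p = build_hashes(d)
x = [0.0] * N
heavy = {10, 500, 901}
for i in heavy:
    x[i] = 1000.0
noise_rng = random.Random(1)
for i in noise_rng.sample(range(N), 20):
    x[i] += 1.0  # small "tail" entries

C = measure(x, d, w, hashes, p)          # m = d*w = 320 measurements, m << N
recovered = decode_topk(C, N, k, w, hashes, p)
```

Note how the measurement count $m = d \cdot w$ is far below $N$, yet decoding here still takes time linear in $N$; the point of the paper's construction is precisely to drive decoding time to $o(N)$ while keeping $m = O(k \log(N/k))$.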