Finding Frequent Items in Data Streams
ICALP '02 Proceedings of the 29th International Colloquium on Automata, Languages and Programming
A New Approach To Information Theory
STACS '94 Proceedings of the 11th Annual Symposium on Theoretical Aspects of Computer Science
Communication lower bounds for distributed-memory matrix multiplication
Journal of Parallel and Distributed Computing
Network coding: does the model need tuning?
SODA '05 Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms
List decoding and property testing of error-correcting codes
Near-Optimal Sparse Recovery in the L1 Norm
FOCS '08 Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science
(1 + ε)-Approximate Sparse Recovery
FOCS '11 Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science
Sublinear time, measurement-optimal, sparse recovery for all
SODA '12 Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms
Worst-case optimal join algorithms (extended abstract)
PODS '12 Proceedings of the 31st symposium on Principles of Database Systems
Reliable communication under channel uncertainty
IEEE Transactions on Information Theory
Improved decoding of Reed-Solomon and algebraic-geometry codes
IEEE Transactions on Information Theory
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
IEEE Transactions on Information Theory
Explicit Codes Achieving List Decoding Capacity: Error-Correction With Optimal Redundancy
IEEE Transactions on Information Theory
Approximate Sparse Recovery: Optimizing Time and Measurements
SIAM Journal on Computing
Automated signature extraction for high volume attacks
ANCS '13 Proceedings of the ninth ACM/IEEE symposium on Architectures for networking and communications systems
In this paper, we consider the "foreach" sparse recovery problem with failure probability p. The goal of the problem is to design a distribution over m × N matrices Φ and a decoding algorithm A such that for every x ∈ ℝ^N, we have with probability at least 1 − p
$$\|\mathbf{x}-A(\Phi\mathbf{x})\|_2 \leqslant C\|\mathbf{x}-\mathbf{x}_k\|_2,$$
where x_k is the best k-sparse approximation of x. Our two main results are: (1) we prove a lower bound of Ω(k log(N/k) + log(1/p)) on m, the number of measurements, for $2^{-\Theta(N)} \leqslant p < 1$; and (2) we give upper bounds in this regime achieved by schemes with sub-linear time decoding. Previous such results were obtained only when p = Ω(1). One corollary of our result is an extension of the results of Gilbert et al. [6] for information-theoretically bounded adversaries.
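The ℓ2/ℓ2 guarantee in the abstract can be illustrated with a small numerical sketch. The decoder below is standard Orthogonal Matching Pursuit (OMP) over a Gaussian measurement matrix — not the sub-linear-time scheme of the paper — and all parameter values (N, k, m, the seed) are illustrative assumptions; for an exactly k-sparse x the right-hand side C‖x − x_k‖₂ is zero, so successful recovery must be (near-)exact:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 256, 4
m = 80  # on the order of k * log(N / k), with a generous constant

# Random Gaussian measurement matrix Phi (columns roughly unit norm).
Phi = rng.standard_normal((m, N)) / np.sqrt(m)

# An exactly k-sparse signal x, so x_k = x and ||x - x_k||_2 = 0.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = Phi @ x  # the m measurements

def omp(Phi, y, k):
    """Greedy k-sparse recovery: repeatedly pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    S = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in S:
            S.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        residual = y - Phi[:, S] @ coef
    xhat = np.zeros(Phi.shape[1])
    xhat[S] = coef
    return xhat

xhat = omp(Phi, y, k)
err = np.linalg.norm(x - xhat)  # expected to be ~machine precision here
```

With these dimensions OMP recovers x to machine precision with overwhelming probability, which matches the "foreach" guarantee for the zero-tail case; the failure probability p in the abstract quantifies how often a draw of Φ fails some x.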