The classical direct product theorem for circuits says that if a Boolean function $f:\{0,1\}^n\to\{0,1\}$ is somewhat hard to compute on average by small circuits, then the corresponding $k$-wise direct product function $f^k(x_1,\dots,x_k)=(f(x_1),\dots,f(x_k))$ (where each $x_i\in\{0,1\}^n$) is significantly harder to compute on average by slightly smaller circuits. We prove a fully uniform version of the direct product theorem with information-theoretically optimal parameters, up to constant factors. Namely, we show that for given $k$ and $\epsilon$, there is an efficient randomized algorithm $A$ with the following property. Given a circuit $C$ that computes $f^k$ on at least an $\epsilon$ fraction of inputs, the algorithm $A$ outputs, with probability at least $3/4$, a list of $O(1/\epsilon)$ circuits such that at least one circuit on the list computes $f$ on more than a $1-\delta$ fraction of inputs, for $\delta=O((\log(1/\epsilon))/k)$; moreover, each output circuit is an $\mathsf{AC}^0$ circuit (of size $\mathrm{poly}(n,k,\log(1/\delta),1/\epsilon)$), with oracle access to the circuit $C$. Using the Goldreich-Levin decoding algorithm [O. Goldreich and L. A. Levin, A hard-core predicate for all one-way functions, in Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, Seattle, 1989, pp. 25-32], we also get a fully uniform version of Yao's XOR lemma [A. C. Yao, Theory and applications of trapdoor functions, in Proceedings of the Twenty-Third Annual IEEE Symposium on Foundations of Computer Science, Chicago, 1982, pp. 80-91] with optimal parameters, up to constant factors. Our results simplify and improve those in [R. Impagliazzo, R. Jaiswal, and V. Kabanets, Approximately list-decoding direct product codes and uniform hardness amplification, in Proceedings of the Forty-Seventh Annual IEEE Symposium on Foundations of Computer Science, Berkeley, CA, 2006, pp. 187-196].
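To make the objects above concrete, the following minimal Python sketch builds the $k$-wise direct product $f^k$ for a toy function $f$ and measures the agreement fraction $\epsilon$ of a hypothetical faulty circuit $C$ with $f^k$; the particular choices of $f$ and $C$ are illustrative assumptions, not part of the paper.

```python
import itertools

def direct_product(f, k):
    """The k-wise direct product f^k: (x_1,...,x_k) -> (f(x_1),...,f(x_k))."""
    return lambda xs: tuple(f(x) for x in xs)

# Toy instance (hypothetical choices): f = parity on 3-bit strings, k = 2.
n, k = 3, 2
f = lambda x: sum(x) % 2
fk = direct_product(f, k)

inputs = list(itertools.product([0, 1], repeat=n))

# A faulty "circuit" C: correct on a pair (x_1, x_2) exactly when the
# first bit of x_1 is 0, and wrong on every coordinate otherwise.
C = lambda xs: fk(xs) if xs[0][0] == 0 else tuple(1 - b for b in fk(xs))

# Agreement fraction eps of C with f^k over all k-tuples of inputs.
agree = sum(C(xs) == fk(xs) for xs in itertools.product(inputs, repeat=k))
eps = agree / len(inputs) ** k
print(eps)  # 0.5
```

The decoding algorithm of the paper would take such a $C$ (with any agreement $\epsilon$) as an oracle and return a short list of small circuits, one of which computes $f$ itself on a $1-\delta$ fraction of inputs.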
Our main result may be viewed as an efficient approximate, local, list-decoding algorithm for direct product codes (encoding a function by its values on all $k$-tuples) with optimal parameters. We generalize it to a family of “derandomized” direct product codes, which we call intersection codes, where the encoding provides values of the function only on a subfamily of $k$-tuples. The quality of the decoding algorithm is then determined by sampling properties of the sets in this family and their intersections. As a direct consequence of this generalization we obtain the first derandomized direct product result in the uniform setting, allowing hardness amplification with only a constant-factor (as opposed to a factor-of-$k$) increase in the input length. Finally, this general setting naturally allows the decoding of concatenated codes, which further yields nearly optimal derandomized amplification.
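The derandomization idea can be sketched in code: instead of encoding $f$ by its values on all ordered $k$-tuples, an intersection-style encoding lists its values only on a structured subfamily, here (as an assumed toy choice) the $k$-element subsets of the domain, which already shrinks the number of positions in the codeword.

```python
import itertools
from math import comb

def encode_on_family(f, family):
    """Encode f by its value-tuples on each member of a subfamily of k-tuples,
    rather than on all ordered k-tuples (the full direct product code)."""
    return {S: tuple(f(x) for x in S) for S in family}

# Toy domain and function (hypothetical choices): 3-bit strings, parity.
domain = list(itertools.product([0, 1], repeat=3))
f = lambda x: sum(x) % 2
k = 2

# Subfamily: all k-element subsets of the domain, in a fixed order.
family = list(itertools.combinations(domain, k))
codeword = encode_on_family(f, family)

# The full direct product has |domain|^k positions; the subset family
# has C(|domain|, k), and the paper's combinatorial designs go much lower.
print(len(codeword), len(domain) ** k)  # 28 64
assert len(codeword) == comb(len(domain), k)
```

In the paper the subfamily is chosen with good sampling properties (for the sets and their pairwise intersections), which is what lets the decoder work while each codeword position can be indexed with only a constant-factor blowup in input length.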