Hardness amplification proofs require majority
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
The Complexity of Local List Decoding
APPROX/RANDOM '08: Proceedings of the 11th International Workshop APPROX 2008 and 12th International Workshop RANDOM 2008 on Approximation, Randomization and Combinatorial Optimization: Algorithms and Techniques
Guest Column: correlation bounds for polynomials over {0,1}
ACM SIGACT News
Bit-probe lower bounds for succinct data structures
Proceedings of the forty-first annual ACM symposium on Theory of computing
On the Power of Small-Depth Computation
Foundations and Trends® in Theoretical Computer Science
Hardness Amplification Proofs Require Majority
SIAM Journal on Computing
On the complexity of hard-core set constructions
ICALP'07 Proceedings of the 34th international conference on Automata, Languages and Programming
This thesis studies the interplay between randomness and computation. We investigate this interplay from the perspectives of hardness amplification and derandomization.

Hardness amplification is the task of taking a function that is hard to compute on some input, or on some fraction of inputs, and producing a new function that is very hard on average, i.e., hard to compute on a fraction of inputs that is as large as possible. Hardness amplification is an important step toward understanding average-case hardness, and is also motivated by modern cryptography, which for the most part relies on the existence of a very average-case hard function in NP. Our results in this area include the following: (1) We show that if NP contains a function that is hard to compute on a constant fraction of inputs, then NP contains a function that is hard to compute on a fraction of inputs that is exponentially close to one-half, as opposed to polynomially close to one-half in previous work. (2) We show that there is no black-box construction of an average-case hard function in NP starting from a worst-case hard function.

Derandomization studies the possibility of removing randomness from probabilistic algorithms. Its study is key to understanding the power of randomness in computation, and has recently led to several algorithmic breakthroughs. Our contributions to this area include the following: (1) We construct a new pseudorandom generator that stretches a random seed into a much longer sequence that looks random to any small constant-depth circuit with a few arbitrary symmetric gates, such as Parity or Majority. (2) We show that any black-box simulation of randomized polynomial time in the second level of the polynomial-time hierarchy must incur a quadratic slow-down in the running time, which matches the running time of known simulations. We also exhibit a quasilinear-time simulation at the third level of the polynomial-time hierarchy.
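To make the notion of hardness amplification concrete, the following toy sketch illustrates the classical XOR construction (in the spirit of Yao's XOR lemma): from a function f that is mildly hard on average, build f_xor(x1, ..., xk) = f(x1) XOR ... XOR f(xk), which is heuristically much harder because errors on the k independent blocks compound. This is a generic textbook construction for illustration only, not the specific construction studied in the thesis; the function names here are hypothetical.

```python
# Toy sketch of XOR-based hardness amplification (Yao's XOR lemma style).
# Illustration only -- not the thesis's construction.
from functools import reduce

def xor_amplify(f, k):
    """Return the k-fold XOR of f: a function taking a tuple of k inputs
    and outputting f(x1) XOR ... XOR f(xk)."""
    def f_xor(xs):
        assert len(xs) == k, "expected exactly k independent inputs"
        return reduce(lambda a, b: a ^ b, (f(x) for x in xs))
    return f_xor

# Stand-in "hard" Boolean function: parity of the bits of an integer.
f = lambda x: bin(x).count("1") % 2

g = xor_amplify(f, 3)
print(g((1, 2, 4)))  # f(1)=f(2)=f(4)=1, so 1 XOR 1 XOR 1 = 1
```

The intuition the sketch captures: an adversary that errs on each block independently with probability p succeeds on the XOR only when it makes an even number of errors, so its advantage over random guessing decays exponentially in k.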