Pseudo-random generators for all hardnesses
STOC '02 Proceedings of the thirty-fourth annual ACM symposium on Theory of computing
Derandomizing Arthur-Merlin Games under Uniform Assumptions
ISAAC '00 Proceedings of the 11th International Conference on Algorithms and Computation
Derandomizing polynomial identity tests means proving circuit lower bounds
Proceedings of the thirty-fifth annual ACM symposium on Theory of computing
Uniform hardness versus randomness tradeoffs for Arthur-Merlin games
Computational Complexity
Derandomizing polynomial identity tests means proving circuit lower bounds
Computational Complexity
Low-end uniform hardness vs. randomness tradeoffs for AM
Proceedings of the thirty-ninth annual ACM symposium on Theory of computing
Unions of disjoint NP-complete sets
COCOON'11 Proceedings of the 17th annual international conference on Computing and combinatorics
Computational complexity since 1980
FSTTCS '05 Proceedings of the 25th international conference on Foundations of Software Technology and Theoretical Computer Science
We propose a new approach to derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach combines both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that, under the assumption EXP \neq BPP, BPP allows deterministic subexponential-time simulations that appear correct i.o. with respect to any efficiently sampleable distribution; in contrast, our result does not rely on any unproven assumptions. As another application of our techniques, we obtain the following gap theorem for ZPP: either every RP algorithm can be simulated by a deterministic subexponential-time algorithm that appears correct i.o. to every efficient adversary, or EXP = ZPP. In particular, this implies that if ZPP is somewhat easy, e.g., ZPP \subseteq DTIME(2^{n^c}) for some fixed constant c, then RP is subexponentially easy in the uniform setting described above.
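The gap theorem in the abstract can be sketched schematically in standard complexity-class notation. The sketch below is only a paraphrase: the precise "appears correct i.o. to every efficient adversary" condition is a uniform, adversary-relative notion defined in the paper, which the i.o.-DTIME shorthand here does not fully capture.

```latex
% Hedged sketch of the gap theorem (paraphrase, not the paper's exact statement):
% either RP has deterministic subexponential-time simulations that look
% correct infinitely often to every efficient adversary, or EXP collapses to ZPP.
\[
  \text{either}\quad
  \mathsf{RP} \subseteq \text{i.o.-}\mathsf{DTIME}\!\left(2^{n^{\varepsilon}}\right)
  \ \text{for every } \varepsilon > 0
  \ \text{(w.r.t. efficient adversaries)},
  \quad\text{or}\quad
  \mathsf{EXP} = \mathsf{ZPP}.
\]
```

In particular, any fixed deterministic upper bound on ZPP (such as ZPP \subseteq DTIME(2^{n^c})) rules out the second branch, forcing the first.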