Easiness Assumptions and Hardness Tests: Trading Time for Zero Error

  • Authors:
  • Valentine Kabanets

  • Affiliations:
  • -

  • Venue:
  • COCO '00: Proceedings of the 15th Annual IEEE Conference on Computational Complexity
  • Year:
  • 2000

Abstract

We propose a new approach to derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach combines easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that, under the assumption that EXP \neq BPP, BPP allows deterministic subexponential-time simulations that appear correct i.o. with respect to any efficiently samplable distribution; in contrast, our result does not rely on any unproven assumptions. As another application of our techniques, we obtain the following gap theorem for ZPP: either every RP algorithm can be simulated by a deterministic subexponential-time algorithm that appears correct i.o. to every efficient adversary, or EXP = ZPP. In particular, this implies that if ZPP is somewhat easy, e.g., ZPP \subseteq DTIME(2^{n^c}) for some fixed constant c, then RP is subexponentially easy in the uniform setting described above.
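
A rough symbolic sketch of the gap theorem and its corollary, written in LaTeX; the bracketed shorthand [io-pseudo]-SUBEXP, standing for "simulable in deterministic subexponential time that appears correct i.o. to every efficient adversary," is used here only for illustration and is not necessarily the paper's own notation:

% Gap theorem (illustrative notation, not necessarily the paper's):
\[
  \text{either}\quad RP \subseteq [\text{io-pseudo}]\text{-}SUBEXP
  \quad\text{or}\quad EXP = ZPP,
\]
% and the stated corollary:
\[
  ZPP \subseteq DTIME\!\left(2^{n^c}\right)\ \text{for some fixed constant } c
  \;\Longrightarrow\;
  RP \subseteq [\text{io-pseudo}]\text{-}SUBEXP.
\]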