Deterministic simulations and hierarchy theorems for randomized algorithms

  • Authors:
  • Dieter van Melkebeek; Jeffrey J. Kinne

  • Affiliations:
  • The University of Wisconsin - Madison; The University of Wisconsin - Madison

  • Venue:
  • Doctoral dissertation, The University of Wisconsin - Madison
  • Year:
  • 2010

Abstract

We present three research directions related to the question of whether all randomized algorithms can be derandomized, i.e., simulated by deterministic algorithms with only a small loss in efficiency.

Typically-Correct Derandomization. A recent line of research has considered "typically correct" deterministic simulations of randomized algorithms, which are allowed to err on a small fraction of inputs. These may be easier to obtain and/or more efficient than full derandomizations, which make no mistakes. We develop a new approach for constructing typically-correct derandomizations and use it to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings, including randomized decision procedures with bounded error (BPP). We also investigate whether typically-correct derandomization of BPP implies circuit lower bounds. We establish a positive answer for small error rates, and in doing so provide a proof for the zero-error setting that is simpler and scales better than earlier arguments.

Monotone Computations. Short of derandomizing all randomized algorithms, we can ask to derandomize more restricted classes of randomized algorithms. Because a strong connection has been established between circuit lower bounds and derandomization, and because worst-case circuit lower bounds have been proved for monotone circuits, randomized monotone computations are a natural candidate to consider. We show that, in fact, any derandomization of randomized monotone computations would derandomize all randomized algorithms, monotone or not. We prove similar results for pseudorandom generators and average-case hard functions.

Hierarchy Theorems. For any computational model, a fundamental question is whether machines with more resources are strictly more powerful than machines with fewer resources. Such results are known as hierarchy theorems. The standard techniques for proving hierarchy theorems fail when applied to bounded-error randomized machines and other so-called "semantic" models of computation, in which a machine must satisfy a promise (such as bounded error) on every input in order to be valid. A recent line of work has made progress by proving time hierarchies for randomized and other semantic models that take one bit of advice. We adapt these techniques to the space-bounded setting, achieving results that are tight for typical space bounds between logarithmic and linear.
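To fix notation for the first direction, here is a minimal sketch in standard complexity-theoretic terms; the error-rate bound ε(n) is a generic placeholder, and the thesis's actual parameters may differ.

    % L is in BPP: some polynomial-time randomized machine M decides L
    % with bounded error on every input x.
    L \in \mathrm{BPP} \iff \exists M \;\forall x:\ \Pr_r[\,M(x,r) = L(x)\,] \ge 2/3

    % Full derandomization asks for a deterministic polynomial-time D
    % with D(x) = L(x) on all inputs (the question "BPP = P?").
    % A typically-correct derandomization relaxes this: D may err on
    % a small fraction of the inputs of each length n.
    \bigl|\{\, x \in \{0,1\}^n : D(x) \neq L(x) \,\}\bigr| \;\le\; \varepsilon(n) \cdot 2^n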
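The main monotone result can be written schematically as an implication; the phrase "randomized monotone computations" is used informally here, and the precise monotone model and simulation overhead are those of the thesis.

    % Schematic form of the connection: a deterministic simulation of
    % the monotone case would yield one for the general case.
    \text{derandomization of randomized monotone computations}
    \;\Longrightarrow\;
    \text{derandomization of all of } \mathrm{BPP}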
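For the third direction, the classical deterministic space hierarchy and the semantic target it is adapted to can be stated schematically; constructibility conditions and the exact gap between the two space bounds are elided, so this shows only the shape of the result, not its precise statement.

    % Classical space hierarchy (s' space-constructible):
    s(n) = o(s'(n)) \;\Longrightarrow\; \mathrm{DSPACE}(s) \subsetneq \mathrm{DSPACE}(s')

    % Semantic analog with one bit of advice, for typical bounds
    % log n <= s(n) <= n; the /1 denotes one advice bit per input length.
    \mathrm{BPSPACE}(s)/1 \;\subsetneq\; \mathrm{BPSPACE}(s')/1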