Can every randomized algorithm be derandomized?

  • Authors: Russell Impagliazzo
  • Affiliations: UCSD
  • Venue: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing
  • Year: 2006

Abstract

Among the most important modern algorithmic techniques is the use of random decisions. Starting in the 1970's, many of the most significant results were randomized algorithms solving basic computational problems that had (to that time) resisted efficient deterministic computation (Ber72, SS79, Rab80, Sch80, Zip79, AKLLR). In contrast, much of the most exciting recent work has been on derandomizing these same algorithms, coming up with efficient deterministic versions, e.g., (AKS02, Rein05). This raises the question: can such results be obtained for all randomized algorithms? Will the remaining classical randomized algorithms be derandomized by similar techniques? Clear but complicated answers to these questions have emerged from complexity-theoretic studies of randomized complexity classes (e.g., RP and BPP) and pseudo-random generators. These questions are inextricably linked to another basic problem in complexity: which functions require large circuits to compute?

In this talk, we'll survey some results from the theory of derandomization. I'll stress connections to other questions, especially circuit complexity, explicit extractors, hardness amplification, and error-correcting codes. Much of the talk is based on joint work with Valentine Kabanets and Avi Wigderson, but it will also include results by many other researchers.

A priori, possibilities concerning the power of randomized algorithms include:

  • Randomization always helps speed up intractable problems, i.e., EXP=BPP.
  • The extent to which randomization helps is problem-specific. Depending on the problem, it can reduce complexity by any amount from not at all to exponentially.
  • True randomness is never needed, and random choices can always be simulated deterministically, i.e., P=BPP.

Either of the last two possibilities seems plausible, but most consider the first wildly implausible. However, while a strong version of the middle possibility has been ruled out, the implausible first one is still open. Recent results indicate that the last possibility, P=BPP, is both very likely to be the case and very difficult to prove. More precisely:

  • Either no problem in E has strictly exponential circuit complexity, or P=BPP. This seems to be strong evidence that, in fact, P=BPP, since otherwise circuits could always shortcut computation time for hard problems (NW, BFNW, IW97, STV01, SU01, Uma02).
  • Either BPP=EXP, or any problem in BPP has a deterministic sub-exponential time algorithm that works on almost all instances. In other words, either randomness solves every hard problem, or it does not help exponentially, except on rare instances. This rules out strong problem-dependence, since if randomization helps exponentially for many instances of some problem, we can conclude that it helps exponentially for all intractable problems (IW98).
  • If RP=P, then either the permanent problem requires super-polynomial algebraic circuits, or there is a problem in NEXP that has no polynomial-size Boolean circuit (IKW01, KI). That is, proving the last possibility requires one to prove a new circuit lower bound, and so is likely to be difficult. (Moreover, we do not need the full hypothesis that RP=P to obtain the same conclusion: it actually suffices that the Schwartz-Zippel identity testing algorithm be derandomizable. Thus, we will not be able to derandomize even the "classic" algorithms without proving circuit lower bounds.)
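The Schwartz-Zippel test mentioned above is a good example of such a "classic" randomized algorithm. Below is a minimal Python sketch of randomized polynomial identity testing; the function name, parameters, and the use of random integer evaluation points (standing in for arithmetic over a large field) are illustrative assumptions, not details taken from the talk.

```python
import random

def probably_identical(p, q, num_vars, degree_bound, trials=20):
    """Randomized polynomial identity test in the Schwartz-Zippel style.

    p and q are black-box polynomials taking a list of integers.  If they are
    the same polynomial, every trial agrees and we return True.  If they
    differ, a single trial catches the difference with probability at least
    1 - degree_bound / sample_range, so the chance of a wrong "identical"
    answer shrinks exponentially with the number of trials.
    """
    sample_range = 100 * degree_bound  # evaluation set much larger than the degree
    for _ in range(trials):
        point = [random.randrange(sample_range) for _ in range(num_vars)]
        if p(point) != q(point):
            return False   # found a disagreement: definitely not identical
    return True            # all trials agreed: identical with high probability

# Example: is (x + y)^2 the same polynomial as its expansion x^2 + 2xy + y^2?
p = lambda v: (v[0] + v[1]) ** 2
q = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
print(probably_identical(p, q, num_vars=2, degree_bound=2))  # True
```

Derandomizing this test means finding a small, deterministically computable set of evaluation points that works for every pair of low-degree polynomials; the results above say that doing so would imply new circuit lower bounds.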
All of these results use the hardness-vs-randomness paradigm introduced by Yao (Yao82; see also BM, Levin): use a hard computational problem to define a small set of "pseudo-random" strings that no limited adversary can distinguish from random, then use these "pseudo-random" strings to replace the random choices in a probabilistic algorithm. The algorithm will not have enough time to distinguish the pseudo-random sequences from truly random ones, and so will behave the same as it would given random sequences.
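To make the paradigm concrete, here is a hedged sketch of how such a generator yields a deterministic simulation: enumerate every seed, feed each resulting pseudo-random string to the algorithm, and take a majority vote. The names derandomize, prg, and bpp_algorithm are placeholders, and no particular generator construction is assumed.

```python
from itertools import product

def derandomize(bpp_algorithm, x, prg, seed_len):
    """Deterministic simulation of a randomized algorithm via a generator (sketch).

    bpp_algorithm(x, bits) is a two-sided-error randomized algorithm that
    expects a string of random bits.  prg(seed) deterministically stretches a
    short seed into such a string.  If the algorithm cannot distinguish the
    generator's output from true randomness, a majority vote over all
    2**seed_len seeds gives the answer the randomized algorithm would give
    with high probability; with seed_len = O(log n) the loop below runs in
    polynomial time.
    """
    accepting_seeds = 0
    for seed_bits in product("01", repeat=seed_len):
        pseudo_random_string = prg("".join(seed_bits))
        if bpp_algorithm(x, pseudo_random_string):
            accepting_seeds += 1
    return 2 * accepting_seeds > 2 ** seed_len  # accept iff a majority of seeds accept

# Toy usage with placeholder pieces: an "algorithm" that ignores its random
# bits and an identity "generator", just to show the calling convention.
toy_algorithm = lambda x, bits: x % 2 == 0
toy_prg = lambda seed: seed
print(derandomize(toy_algorithm, 10, toy_prg, seed_len=3))  # True
```

The whole difficulty lies in building a generator that fools the algorithm; the hardness-vs-randomness results cited above show how to do so from any sufficiently hard function, which is why circuit lower bounds and derandomization are so tightly linked.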