How to generate cryptographically strong sequences of pseudo-random bits
SIAM Journal on Computing
One-way functions and Pseudorandom generators
Combinatorica - Theory of Computing
Expanders, randomness, or time versus space
Journal of Computer and System Sciences - Structure in Complexity Theory Conference, June 2-5, 1986
Simulating BPP using a general weak random source
SFCS '91 Proceedings of the 32nd annual symposium on Foundations of computer science
Why is Boolean complexity theory difficult?
Proceedings of the London Mathematical Society symposium on Boolean function complexity
Journal of Computer and System Sciences
BPP has subexponential time simulations unless EXPTIME has publishable proofs
Computational Complexity
Journal of Computer and System Sciences
P = BPP if E requires exponential circuits: derandomizing the XOR lemma
STOC '97 Proceedings of the twenty-ninth annual ACM symposium on Theory of computing
Decoding of Reed-Solomon codes beyond the error-correction bound
Journal of Complexity
Fast Probabilistic Algorithms for Verification of Polynomial Identities
Journal of the ACM (JACM)
Pseudorandom generators without the XOR lemma
Journal of Computer and System Sciences - Special issue on the fourteenth annual IEEE conference on computational complexity
Extractors and pseudorandom generators
Journal of the ACM (JACM)
Pseudo-random generators for all hardnesses
STOC '02 Proceedings of the thirty-fourth annual ACM symposium on Theory of computing
Probabilistic algorithms for sparse polynomials
EUROSAM '79 Proceedings of the International Symposium on Symbolic and Algebraic Computation
Randomness vs. Time: De-Randomization under a Uniform Assumption
FOCS '98 Proceedings of the 39th Annual Symposium on Foundations of Computer Science
In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time
CCC '01 Proceedings of the 16th Annual Conference on Computational Complexity
Simple Extractors for All Min-Entropies and a New Pseudo-Random Generator
FOCS '01 Proceedings of the 42nd IEEE symposium on Foundations of Computer Science
Undirected ST-connectivity in log-space
Proceedings of the thirty-seventh annual ACM symposium on Theory of computing
Derandomizing polynomial identity tests means proving circuit lower bounds
Computational Complexity
Typically-correct derandomization
ACM SIGACT News
Among the most important modern algorithmic techniques is the use of random decisions. Starting in the 1970s, many of the most significant results were randomized algorithms solving basic computational problems that had, until then, resisted efficient deterministic computation (Ber72, SS79, Rab80, Sch80, Zip79, AKLLR). In contrast, much of the most exciting recent work has been on derandomizing these same algorithms, that is, devising efficient deterministic versions, e.g., (AKS02, Rein05). This raises two questions: can such results be obtained for all randomized algorithms? Will the remaining classical randomized algorithms be derandomized by similar techniques?

Clear but complicated answers to these questions have emerged from complexity-theoretic studies of randomized complexity classes (e.g., RP and BPP) and pseudo-random generators. These questions are inextricably linked to another basic problem in complexity: which functions require large circuits to compute?

In this talk, we'll survey some results from the theory of derandomization. I'll stress connections to other questions, especially circuit complexity, explicit extractors, hardness amplification, and error-correcting codes. Much of the talk is based on joint work with Valentine Kabanets and Avi Wigderson, but it will also include results by many other researchers.

A priori, the possibilities concerning the power of randomized algorithms include:

1. Randomization always helps speed up intractable problems, i.e., EXP=BPP.
2. The extent to which randomization helps is problem-specific. Depending on the problem, it can reduce complexity by any amount, from not at all to exponentially.
3. True randomness is never needed, and random choices can always be simulated deterministically, i.e., P=BPP.

Either of the last two possibilities seems plausible, but most consider the first wildly implausible. However, while a strong version of the middle possibility has been ruled out, the implausible first one is still open.
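A representative "classic" randomized algorithm from the list above is polynomial identity testing (Sch80, Zip79): to decide whether two polynomials agree identically, evaluate both at a random point. The sketch below is illustrative only; the function name and interface are hypothetical, and it works over the integers, sampling from a range of size twice the degree bound so each trial errs with probability at most 1/2.

```python
import random

def polys_equal(p, q, num_vars, degree_bound, trials=20):
    """Probabilistic polynomial identity test (Schwartz-Zippel sketch).

    p, q: callables taking a list of integer coordinates.
    If p and q differ as polynomials of total degree <= degree_bound,
    a random point with coordinates drawn from a set of size
    2*degree_bound exposes the difference with probability >= 1/2
    per trial, so 20 trials miss with probability <= 2**-20.
    """
    sample_size = max(2 * degree_bound, 2)
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(num_vars)]
        if p(point) != q(point):
            return False  # witnesses that p and q are different polynomials
    return True  # identical with high probability
```

Note that a "False" answer is always correct, while "True" is only correct with high probability; this one-sided error is what places identity testing in co-RP.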
Recent results indicate that the last possibility, P=BPP, is both very likely to be the case and very difficult to prove. More precisely:

Either no problem in E has strictly exponential circuit complexity or P=BPP. This seems to be strong evidence that, in fact, P=BPP, since otherwise circuits can always shortcut computation time for hard problems (NW, BFNW, IW97, STV01, SU01, Uma02).

Either BPP=EXP, or every problem in BPP has a deterministic sub-exponential time algorithm that works on almost all instances. In other words, either randomness solves every hard problem, or it does not help exponentially, except on rare instances. This rules out strong problem-dependence, since if randomization helps exponentially for many instances of some problem, we can conclude that it helps exponentially for all intractable problems (IW98).

If RP=P, then either the permanent problem requires super-polynomial algebraic circuits or there is a problem in NEXP that has no polynomial-size Boolean circuit (IKW01, KI). That is, proving the last possibility requires one to prove a new circuit lower bound, and so is likely to be difficult. (Moreover, we do not need the full hypothesis that P=RP to obtain this conclusion: it suffices that the Schwartz-Zippel identity testing algorithm be derandomizable. Thus, we will not be able to derandomize even the "classic" algorithms without proving circuit lower bounds.)

All of these results use the hardness-vs-randomness paradigm introduced by Yao (Yao82; see also BM, Levin): use a hard computational problem to define a small set of "pseudo-random" strings that no resource-limited adversary can distinguish from random, then use these "pseudo-random" strings to replace the random choices in a probabilistic algorithm. The algorithm will not have enough time to distinguish the pseudo-random sequences from truly random ones, and so will behave the same as it would given truly random sequences.
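The paradigm described above can be sketched mechanically: if a generator stretches a short seed into pseudo-random bits that fool the algorithm, we can enumerate every seed and take a majority vote; with seed length O(log n) this is only a polynomial slowdown. The sketch below is a schematic illustration, not any specific construction from the talk; the toy generator and the toy randomized algorithm are invented for the example.

```python
from itertools import product

def derandomize(randomized_alg, prg, seed_len, input_x):
    """Derandomization by seed enumeration (sketch).

    Run the randomized algorithm on the pseudorandom string prg(s)
    for every seed s in {0,1}^seed_len and return the majority answer.
    If the generator fools the algorithm, the fraction of accepting
    pseudorandom strings is close to the fraction of accepting truly
    random strings, so the majority vote gives the BPP answer.
    """
    votes = []
    for seed in product([0, 1], repeat=seed_len):
        pseudo_random_bits = prg(seed)
        votes.append(randomized_alg(input_x, pseudo_random_bits))
    return max(set(votes), key=votes.count)  # majority answer

def toy_prg(seed):
    """Toy 'generator' (illustrative only): repeat the seed to get more bits."""
    return list(seed) * 4

def toy_randomized_alg(x, bits):
    """Toy two-sided-error algorithm for parity of x: it answers
    wrongly exactly when its random string is all zeros."""
    if all(b == 0 for b in bits):
        return not (x % 2 == 0)  # the rare "bad" random string
    return x % 2 == 0
```

With seed_len = 3, only 1 of the 8 seeds produces the bad all-zero string, so the majority vote is correct; the real content of the hardness-vs-randomness results is constructing a generator for which this vote provably agrees with the truly random behavior.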