Simulating independence: new constructions of condensers, Ramsey graphs, dispersers, and extractors

  • Authors:
  • Boaz Barak; Guy Kindler; Ronen Shaltiel; Benny Sudakov; Avi Wigderson

  • Affiliations:
  • Institute for Advanced Study, Princeton, NJ; Institute for Advanced Study, Princeton, NJ; Haifa University, Haifa, Israel; Princeton University, Princeton, NJ; Institute for Advanced Study, Princeton, NJ

  • Venue:
  • Proceedings of the thirty-seventh annual ACM symposium on Theory of computing
  • Year:
  • 2005

Abstract

A distribution X over binary strings of length n has min-entropy k if every string has probability at most 2^{-k} in X. We say that X is a δ-source if its rate k/n is at least δ.

We give the following new explicit constructions (namely, poly(n)-time computable functions) of deterministic extractors, dispersers and related objects. All work for any fixed rate δ > 0. No previous explicit construction was known for either of these, for any δ < 1/2. The first two constitute major progress on very long-standing open problems.

  • Bipartite Ramsey: f_1 : ({0,1}^n)^2 → {0,1}, such that for any two independent δ-sources X_1, X_2 we have f_1(X_1, X_2) = {0,1}, i.e., f_1 attains both output values. This implies a new explicit construction of 2N-vertex bipartite graphs in which no induced N^δ by N^δ subgraph is complete or empty.
  • Multiple source extraction: f_2 : ({0,1}^n)^3 → {0,1}, such that for any three independent δ-sources X_1, X_2, X_3, the bit f_2(X_1, X_2, X_3) is (o(1)-close to being) an unbiased random bit.
  • Constant seed condenser: f_3 : {0,1}^n → ({0,1}^m)^c, such that for any δ-source X, one of the c output distributions f_3(X)_i is a 0.9-source over {0,1}^m. Here c is a constant depending only on δ.
  • Subspace Ramsey: f_4 : {0,1}^n → {0,1}, such that for any affine δ-source X we have f_4(X) = {0,1}.

The constructions are quite involved and use as building blocks other new and known gadgets. But we can point out two important themes which recur in these constructions. One is that gadgets which were designed to work with independent inputs sometimes perform well enough with correlated, high-entropy inputs. The second is using the input to (introspectively) find high-entropy regions within itself.
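
To make the opening definitions concrete, here is a minimal sketch (in Python; not from the paper, and the function names are our own) of the min-entropy and δ-source conditions for a finite distribution given as an explicit probability table:

```python
import math

def min_entropy(probs):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x]).

    `probs` maps each n-bit string to its probability; the values
    are assumed to sum to 1.
    """
    return -math.log2(max(probs.values()))

def is_delta_source(probs, n, delta):
    """X is a delta-source over {0,1}^n if its rate H_inf(X)/n is at least delta."""
    return min_entropy(probs) / n >= delta

# Example: the uniform distribution over {0,1}^4 assigns every string
# probability 2^{-4}, so its min-entropy is 4 and its rate is 1.
uniform4 = {format(x, "04b"): 1 / 16 for x in range(16)}
print(min_entropy(uniform4))              # 4.0
print(is_delta_source(uniform4, 4, 0.5))  # True
```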
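
The bipartite Ramsey property above can likewise be spelled out in code. The following brute-force checker (again a hypothetical illustration, exponential in N and usable only for tiny examples; the paper's contribution is an explicit construction achieving this for subgraphs of size N^δ) tests whether an N-by-N bipartite adjacency matrix contains a complete or empty induced k-by-k subgraph:

```python
from itertools import combinations

def has_homogeneous_kxk(adj, k):
    """Return True iff the bipartite graph given by the 0/1 matrix `adj`
    has k left vertices and k right vertices inducing a subgraph that is
    complete (all 1s) or empty (all 0s)."""
    n = len(adj)
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            vals = {adj[r][c] for r in rows for c in cols}
            if len(vals) == 1:  # all k*k entries equal: complete or empty
                return True
    return False

# A 4x4 example with no monochromatic 2x2 combinatorial rectangle.
adj = [[0, 0, 1, 1],
       [0, 1, 0, 1],
       [0, 1, 1, 0],
       [1, 0, 0, 1]]
print(has_homogeneous_kxk(adj, 2))  # False
```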