Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM Journal on Computing, special issue on cryptography.
Pseudo-random generation from one-way functions. STOC '89: Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing.
Tiny families of functions with random properties: a quality-size trade-off for hashing. Proceedings of the Workshop on Randomized Algorithms and Computation.
A Pseudorandom Generator from any One-way Function. SIAM Journal on Computing.
Extractors: optimal up to constant factors. Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing.
Simple extractors for all min-entropies and a new pseudorandom generator. Journal of the ACM (JACM).
Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time. FOCS '05: Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science.
Extracting Randomness via Repeated Condensing. SIAM Journal on Computing.
Unbalanced Expanders and Randomness Extractors from Parvaresh-Vardy Codes. CCC '07: Proceedings of the Twenty-Second Annual IEEE Conference on Computational Complexity.
An Improved Analysis of Linear Mergers. Computational Complexity.
Extensions to the Method of Multiplicities, with Applications to Kakeya Sets and Mergers. FOCS '09: Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science.
New affine-invariant codes from lifting. Proceedings of the 4th Conference on Innovations in Theoretical Computer Science.
A merger is a probabilistic procedure which extracts the randomness out of any (arbitrarily correlated) set of random variables, as long as one of them is uniform. Our main result is an efficient, simple, and optimal (to constant factors) merger, which, for $k$ random variables on $n$ bits each, uses an $O(\log(nk))$ seed, and whose error is $1/nk$. Our merger can be viewed as a derandomized version of the merger of Lu et al. [Extractors: Optimal up to constant factors, in Proceedings of the 35th Annual ACM Symposium on Theory of Computing, ACM, New York, 2003, pp. 602-611]. Its analysis generalizes the recent resolution of the Kakeya problem in finite fields of Dvir [J. Amer. Math. Soc., 22 (2009), pp. 1093-1097]. Following the plan set forth by Ta-Shma [Refining Randomness, Ph.D. thesis, The Hebrew University, Jerusalem, Israel, 1996], who defined mergers as part of that plan, our merger provides the last "missing link" to a simple and modular construction of extractors for all entropies, which is optimal to constant factors in all parameters. This complements the elegant construction of such extractors given by Guruswami, Umans, and Vadhan [Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes, in CCC '07: Proceedings of the Twenty-Second Annual IEEE Conference on Computational Complexity, IEEE Computer Society, Washington, DC, 2007, pp. 96-108]. We also give simple extensions of our merger in two directions. First, we generalize it to handle the case where no source is uniform; in that case the merger extracts the entropy present in the most random of the given sources. Second, we observe that the merger works just as well in the computational setting, when the sources are efficiently samplable, and computational notions of entropy replace the information theoretic ones.
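To make the merger operation concrete, the following is a minimal sketch of a curve-based merger in the spirit of the construction analyzed here: each source is viewed as a vector over a finite field, the unique low-degree curve passing through the $k$ source points is computed by Lagrange interpolation, and the output is the curve's value at a seed point. The field size and parameter choices below are illustrative assumptions, not the paper's exact settings, and no error analysis is attempted.

```python
# Illustrative sketch of a curve merger: interpolate a degree-(k-1) curve
# through k points in F_p^m and evaluate it at a seed point u in F_p.
# The prime p and the encoding of sources as vectors are arbitrary choices
# made for this sketch only.

P = 2**13 - 1  # 8191, a Mersenne prime; stands in for the field F_q


def lagrange_eval(values, u):
    """Evaluate at u the unique degree-(k-1) polynomial over F_P
    that takes value values[i] at the field element i."""
    k = len(values)
    total = 0
    for i, yi in enumerate(values):
        num, den = 1, 1
        for j in range(k):
            if j != i:
                num = num * (u - j) % P
                den = den * (i - j) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total


def merge(sources, u):
    """Curve merger: sources is a list of k vectors in F_P^m.
    Coordinate-wise, pass the degree-(k-1) curve through the k points
    and output its value at the seed u.  When u = i the output is
    exactly sources[i], so each individual source is reproduced by
    some seed value."""
    m = len(sources[0])
    return [lagrange_eval([s[c] for s in sources], u) for c in range(m)]
```

For example, with `sources = [[1, 2], [3, 4], [5, 6]]`, the call `merge(sources, 1)` returns the second source unchanged, while a uniformly random seed `u` yields the distribution whose closeness to uniform (when one source is uniform) is what the paper's Kakeya-style analysis establishes.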