On the randomness complexity of efficient sampling

  • Authors:
  • Bella Dubrov; Yuval Ishai

  • Affiliations:
  • Technion; Technion

  • Venue:
  • Proceedings of the thirty-eighth annual ACM symposium on Theory of computing
  • Year:
  • 2006


Abstract

We consider the following question: Can every efficiently samplable distribution be efficiently sampled, up to a small statistical distance, using roughly as much randomness as the length of its output? Towards a study of this question we generalize the current theory of pseudorandomness and consider pseudorandom generators that fool non-boolean distinguishers (nb-PRGs). We show a link between nb-PRGs and a notion of function compression, introduced by Harnik and Naor [16]. (A compression algorithm for f should efficiently compress an input x in a way that preserves the information needed to compute f(x).) By constructing nb-PRGs, we answer the above question affirmatively under the following types of assumptions:

  • Cryptographic incompressibility assumptions (that are implied by, and seem weaker than, "exponential" cryptographic assumptions).
  • Nisan-Wigderson style (average-case) incompressibility assumptions for polynomial-time computable functions.
  • No assumptions are needed to answer the question affirmatively in the case of constant-depth samplers.

To complement the above, we extend an idea from [16] and establish the following win-win situation: if the answer to our main question is "no", then it is possible to construct a (weak variant of a) collision-resistant hash function from any one-way permutation. The latter would be considered a surprising result, as a black-box construction of this type was ruled out by Simon [35].

Finally, we present an application of nb-PRGs to information-theoretic cryptography. Specifically, under any of the above assumptions, efficient protocols for information-theoretically secure multiparty computation never need to use (much) more randomness than communication.
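To make the central notion concrete, the following is a minimal sketch of the nb-PRG definition as suggested by the abstract; the notation ($G$, $m$, $n$, $k$, $\varepsilon$, $\mathrm{SD}$, $U_t$) is chosen here for illustration and is not quoted from the paper.

  A function $G : \{0,1\}^m \to \{0,1\}^n$ is an $\varepsilon$-nb-PRG against a class of
  distinguishers $D : \{0,1\}^n \to \{0,1\}^k$ (note the $k$-bit, non-boolean output) if,
  for every $D$ in the class,
  \[
      \mathrm{SD}\bigl(D(G(U_m)),\, D(U_n)\bigr) \le \varepsilon,
  \]
  where $U_t$ denotes the uniform distribution over $\{0,1\}^t$ and
  \[
      \mathrm{SD}(X, Y) = \tfrac{1}{2} \sum_{z} \bigl|\Pr[X = z] - \Pr[Y = z]\bigr|
  \]
  is the statistical (total variation) distance. Setting $k = 1$ recovers the usual
  boolean notion of a PRG; allowing $k > 1$ is what lets such generators shrink the
  randomness needed to approximately sample a distribution down to roughly its output
  length, as discussed in the abstract.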