Tight bounds for depth-two superconcentrators

  • Authors:
  • J. Radhakrishnan; A. Ta-Shma

  • Venue:
  • FOCS '97 Proceedings of the 38th Annual Symposium on Foundations of Computer Science
  • Year:
  • 1997

Abstract

We show that the minimum size of a depth-two N-superconcentrator is Θ(N log²N / log log N). Before this work, optimal bounds were known for all depths except two.

For the upper bound, we build superconcentrators by putting together a small number of disperser graphs; these disperser graphs are obtained using a probabilistic argument. We present two different methods for showing lower bounds. First, we show that superconcentrators contain several disjoint disperser graphs. Combined with the lower bound for disperser graphs due to Kővári, Sós and Turán, this gives an almost optimal lower bound of Ω(N (log N / log log N)²) on the size of N-superconcentrators. The second method, based on the work of Hansel (1964), gives the optimal lower bound.

The Kővári–Sós–Turán method can be extended to give tight lower bounds for extractors, both in terms of the number of truly random bits needed to extract one additional bit and in terms of the unavoidable entropy loss in the system. If the input is an n-bit source with min-entropy κ and the output is required to be within distance ε of the uniform distribution, then to extract even a constant number of additional bits, one must invest at least log(n − κ) + 2 log(1/ε) − O(1) truly random bits; to obtain m output bits one must invest at least m − κ + 2 log(1/ε) − O(1). Thus, there is a loss of 2 log(1/ε) bits during the extraction. Interestingly, in the case of dispersers this entropy loss is only about log log(1/ε).
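The abstract's main quantitative results can be restated compactly in standard notation (this is a summary of the bounds stated above, using κ for the source min-entropy and ε for the statistical distance from uniform, as in the abstract):

```latex
% Tight bound on the size of depth-two N-superconcentrators:
\[
  \mathrm{size}_2(N) \;=\; \Theta\!\left(\frac{N \log^{2} N}{\log\log N}\right).
\]
% Extractor lower bounds for an n-bit source with min-entropy \kappa
% and output within statistical distance \varepsilon of uniform:
\[
  \text{truly random bits needed}
    \;\ge\; \log(n-\kappa) \;+\; 2\log(1/\varepsilon) \;-\; O(1),
\]
\[
  \text{to obtain $m$ output bits:}\quad
    m \;-\; \kappa \;+\; 2\log(1/\varepsilon) \;-\; O(1),
\]
% hence an unavoidable entropy loss of
\[
  2\log(1/\varepsilon) \;-\; O(1)
  \quad\text{(only about $\log\log(1/\varepsilon)$ for dispersers).}
\]
```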