Lower Bounds for the Noisy Broadcast Problem

  • Authors:
  • Navin Goyal, Guy Kindler, Michael Saks

  • Venue:
  • SIAM Journal on Computing
  • Year:
  • 2008

Abstract

We prove the first nontrivial (superlinear) lower bound in the noisy broadcast model, defined by El Gamal in [Open problems presented at the $1984$ workshop on Specific Problems in Communication and Computation sponsored by Bell Communication Research, in Open Problems in Communication and Computation, T. M. Cover and B. Gopinath, eds., Springer-Verlag, New York, 1987, pp. 60-62]. In this model there are $n+1$ processors $P_0,P_1,\ldots,P_n$, each of which is initially given a private input bit $x_i$. The goal is for $P_0$ to learn the value of $f(x_1,\ldots,x_n)$, for some specified function $f$, using a series of noisy broadcasts. At each step a designated processor broadcasts one bit to all of the other processors, and the bit received by each processor is flipped with fixed probability (independently for each recipient). In 1988, Gallager [IEEE Trans. Inform. Theory, 34 (1988), pp. 176-180] gave a noise-resistant protocol that allows $P_0$ to learn the entire input with constant probability in $O(n\log\log n)$ broadcasts. We prove that Gallager's protocol is optimal, up to a constant factor. Our lower bound follows by reduction from a lower bound for generalized noisy decision trees, a new model which may be of independent interest. For this new model we show a lower bound of $\Omega(n \log n)$ on the depth of a tree that learns the entire input. While the above lower bound is for an $n$-bit function, we also show an $\Omega(n\log\log n)$ lower bound for the number of broadcasts required to compute certain explicit boolean-valued functions, when the correct output must be attained with probability at least $1-n^{-\alpha}$ for a constant parameter $\alpha>0$ (this bound applies to all threshold functions as well as any other boolean-valued function with linear sensitivity). This bound also follows by reduction from a lower bound of $\Omega(n\log n)$ on the depth of generalized noisy decision trees that compute the same functions with the same error. We also show a (nontrivial) $\Omega(n)$ lower bound on the depth of generalized noisy decision trees that compute such functions with small constant error. Finally, we show the first protocol in the noisy broadcast model that computes the Hamming weight of the input using a linear number of broadcasts.
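
The broadcast step described above is simple to simulate. The sketch below is not from the paper: it implements the model's noisy broadcast together with the naive repetition baseline that Gallager's $O(n\log\log n)$ protocol improves on, in which each $P_i$ broadcasts its bit $\Theta(\log n)$ times and $P_0$ takes a per-bit majority, for $\Theta(n\log n)$ broadcasts in total. The noise rate `eps`, the repetition count, and all identifiers are illustrative assumptions chosen for the example.

```python
import math
import random

def noisy_broadcast(bit, n, eps, rng):
    # One broadcast step of the model: all n other processors receive the
    # bit, each copy flipped independently with probability eps.
    return [bit ^ (rng.random() < eps) for _ in range(n)]

def naive_repetition_protocol(x, eps=0.1, reps_per_bit=None, rng=None):
    # Trivial baseline (not the paper's protocol): every P_i broadcasts x_i
    # reps_per_bit times; P_0 decodes each bit by majority vote over the
    # noisy copies it received.  Total cost: n * reps_per_bit broadcasts.
    rng = rng or random.Random(0)
    n = len(x)
    reps = reps_per_bit or max(1, math.ceil(4 * math.log(max(n, 2))))
    decoded, broadcasts = [], 0
    for xi in x:
        ones = 0
        for _ in range(reps):
            # Index 0 stands for P_0's (noisy) copy of this broadcast.
            ones += noisy_broadcast(xi, n, eps, rng)[0]
            broadcasts += 1
        decoded.append(1 if 2 * ones > reps else 0)
    return decoded, broadcasts

if __name__ == "__main__":
    rng = random.Random(1)
    x = [rng.randint(0, 1) for _ in range(200)]
    y, cost = naive_repetition_protocol(x, eps=0.1, rng=rng)
    correct = sum(a == b for a, b in zip(x, y))
    print(f"{correct}/{len(x)} bits recovered using {cost} broadcasts")
```

The point of the sketch is the cost accounting rather than the exact constants: $n$ input bits times $\Theta(\log n)$ repetitions per bit gives the $\Theta(n\log n)$ baseline, and increasing `reps_per_bit` only drives each per-bit failure probability lower at proportionally higher broadcast cost.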