Everywhere-tight information cost tradeoffs for augmented index

  • Authors:
  • Amit Chakrabarti and Ranganath Kondapally

  • Affiliations:
  • Department of Computer Science, Dartmouth College, Hanover, NH (both authors)

  • Venue:
  • APPROX'11/RANDOM'11: Proceedings of the 14th International Workshop and 15th International Conference on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques
  • Year:
  • 2011

Abstract

For a variety of reasons, a number of recent works have studied the classic communication problem INDEX, and its variant AUGMENTED-INDEX, from a tradeoff perspective: how much communication can Alice (the player holding the n data bits) save if Bob (the player holding the index) communicates a nontrivial amount? Recently, Magniez et al. (STOC, 2010), Chakrabarti et al. (FOCS, 2010), and Jain and Nayak gave information cost tradeoffs for this problem, where the amount of communication is measured as the amount of information revealed by one player to the other. The latter two works showed that reducing Alice's communication to sublinear requires at least a constant amount of communication from Bob. Here, we show that the above result is just one point on a more general tradeoff curve. That is, we extend the earlier result to show that, for all b, either Bob reveals Ω(b) information to Alice, or else Alice reveals n/2^O(b) information to Bob. This tradeoff lower bound is easily seen to be everywhere-tight, by virtue of an easy two-round deterministic protocol. Our lower bound applies to constant-error randomized protocols, with information measured under an "easy" distribution on inputs.
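The "easy two-round deterministic protocol" matching the lower bound is not spelled out in the abstract, but the natural candidate is a block protocol: partition Alice's n-bit string into 2^b blocks, have Bob announce the b-bit index of the block containing his query position, and have Alice reply with that entire block. Bob then sends b bits and Alice about n/2^b bits, matching the tradeoff curve. The sketch below is our own illustration of this idea (the function name and interface are ours, not from the paper); for simplicity it solves plain INDEX, ignoring the extra prefix Bob holds in AUGMENTED-INDEX.

```python
def block_index_protocol(x: str, i: int, b: int):
    """Two-round deterministic protocol for INDEX on an n-bit string x.

    Bob holds the index i; Alice holds x. Returns a tuple
    (answer bit, bits sent by Bob, bits sent by Alice)."""
    n = len(x)
    num_blocks = 2 ** b
    block_size = -(-n // num_blocks)  # ceil(n / 2^b)

    # Round 1 (Bob -> Alice): the b-bit index of the block containing i.
    block = i // block_size           # communicating this costs b bits

    # Round 2 (Alice -> Bob): the entire block, roughly n / 2^b bits.
    reply = x[block * block_size : (block + 1) * block_size]

    # Bob reads off x[i] from the block locally.
    answer = int(reply[i - block * block_size])
    return answer, b, len(reply)
```

So for every choice of b, one player's cost shrinks exponentially in the other's: Bob spends b bits and Alice about n/2^b, which is exactly the shape of the tradeoff the paper proves optimal.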