Hardness amplification via space-efficient direct products

  • Authors:
  • Venkatesan Guruswami; Valentine Kabanets

  • Affiliations:
  • Department of Computer Science & Engineering, University of Washington, Seattle, WA; School of Computing Science, Simon Fraser University, Vancouver, BC, Canada

  • Venue:
  • LATIN'06: Proceedings of the 7th Latin American Conference on Theoretical Informatics
  • Year:
  • 2006


Abstract

We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function g: {0,1}^n → {0,1} cannot be computed on more than a 1 − δ fraction of inputs by any deterministic time-T and space-S algorithm, where δ ≤ 1/t for some t. Then, for t-step walks w = (v1, ..., vt) in some explicit d-regular expander graph on 2^n vertices, the function g'(w), defined as the concatenation g(v1)···g(vt), cannot be computed on more than a 1 − Ω(tδ) fraction of inputs by any deterministic algorithm running in time ≈ T/d^t − poly(n) and space ≈ S − O(t). As an application, by iterating this construction, we get a deterministic linear-space "worst-case to constant average-case" hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman [14]. Our codes have weaker parameters (encoding length is polynomial, rather than linear), but have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur's remarkable recent proof of the PCP theorem by gap amplification using expanders [4].
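To make the shape of the construction concrete, here is a minimal sketch of how g'(w) = g(v1)···g(vt) is evaluated along a walk. A t-step walk in a d-regular graph is specified by a start vertex plus t − 1 edge labels in {0, ..., d − 1}, so there are 2^n · d^(t−1) walks in total, far fewer than the 2^(tn) inputs of the plain t-fold direct product. The graph below is just a 2-regular cycle, not an actual expander, and `g` is an arbitrary toy function; both are stand-ins for illustration only.

```python
def neighbor(v, label, n_vertices):
    """Toy 2-regular graph (a cycle): label 0 steps right, label 1 steps left.
    A real instantiation would use an explicit constant-degree expander."""
    return (v + 1) % n_vertices if label == 0 else (v - 1) % n_vertices

def direct_product(g, start, labels, n_vertices):
    """Evaluate g'(w): concatenate g's values along the walk w = (v1, ..., vt),
    where the walk is given by a start vertex and t-1 edge labels."""
    v = start
    bits = [g(v)]
    for lab in labels:
        v = neighbor(v, lab, n_vertices)
        bits.append(g(v))
    return tuple(bits)

# Toy example: g = parity of the vertex index, on an 8-vertex cycle.
g = lambda v: v & 1
out = direct_product(g, start=3, labels=[0, 0, 1], n_vertices=8)
# walk: 3 -> 4 -> 5 -> 4, so g' outputs (1, 0, 1, 0)
print(out)
```

Note that the walk itself is computable with O(t) extra space on top of whatever space g needs, which is the source of the "space ≈ S − O(t)" loss in the lemma.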