Hardness Amplification via Space-Efficient Direct Products

  • Authors:
  • Venkatesan Guruswami; Valentine Kabanets

  • Affiliations:
  • University of Washington, Department of Computer Science and Engineering, Seattle, USA; Simon Fraser University, School of Computing Science, Vancouver, Canada

  • Venue:
  • Computational Complexity
  • Year:
  • 2008

Abstract

We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function $g : \{0, 1\}^{n} \rightarrow \{0, 1\}$ cannot be computed on more than a fraction $1 - \delta$ of inputs by any deterministic time $T$ and space $S$ algorithm, where $\delta \leq 1/t$ for some $t$. Then, for $t$-step walks $w = (v_1, \ldots, v_t)$ in some explicit $d$-regular expander graph on $2^n$ vertices, the function $g'(w) \stackrel{\mathrm{def}}{=} (g(v_1), \ldots, g(v_t))$ cannot be computed on more than a fraction $1 - \Omega(t\delta)$ of inputs by any deterministic time $\approx T/d^{t} - \mathrm{poly}(n)$ and space $\approx S - O(t)$ algorithm. As an application, by iterating this construction, we get a deterministic linear-space “worst-case to constant average-case” hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman (1996). Our codes have weaker parameters (encoding length is polynomial rather than linear), but they have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur’s remarkable proof of the PCP theorem by gap amplification using expanders (Dinur 2006).
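The derandomized direct product in the abstract can be sketched concretely: instead of evaluating $g$ on $t$ independent inputs (domain size $N^t$), one evaluates it along $t$-step walks in a $d$-regular graph, shrinking the domain to $N \cdot d^{t-1}$ points. The sketch below is illustrative only: the out-degree-3 graph on $\mathbb{Z}_N$ is a toy stand-in for the explicit expander the paper requires (it is not a true expander), and the parity function `g` is a hypothetical placeholder for a hard function.

```python
from itertools import product

N = 16   # toy number of vertices, standing in for 2^n
d = 3    # out-degree, standing in for the expander's regularity
t = 4    # walk length

def g(v):
    # Hypothetical "hard" function: parity of the bits of v (placeholder only).
    return bin(v).count("1") % 2

def neighbors(v):
    # Toy out-degree-3 graph on Z_N; NOT an expander, purely illustrative.
    return [(v - 1) % N, (v + 1) % N, (2 * v + 1) % N]

def walk_from(v, steps):
    # Deterministic t-step walk: start at v, at each step pick the
    # neighbor indexed by the next entry of `steps` (each in {0,...,d-1}).
    w = [v]
    for s in steps:
        w.append(neighbors(w[-1])[s])
    return tuple(w)

def g_prime(walk):
    # The direct-product function g'(w) = (g(v_1), ..., g(v_t)).
    return tuple(g(v) for v in walk)

# Enumerate the whole domain of g': one walk per (start vertex, step choices).
all_walks = [walk_from(v, steps)
             for v in range(N)
             for steps in product(range(d), repeat=t - 1)]

# Key saving of derandomization: N * d^(t-1) walks, versus N^t for the
# plain (fully independent) direct product.
assert len(all_walks) == N * d ** (t - 1)
```

The point of the walk-based domain is that it is small enough to keep the reduction space-efficient, while the expander's mixing still forces any algorithm that does well on $g'$ to do well on $g$ itself.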