Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time

  • Authors:
  • Farzad Parvaresh; Alexander Vardy

  • Affiliations:
  • University of California San Diego; University of California San Diego

  • Venue:
  • FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
  • Year:
  • 2005

Abstract

We introduce a new family of error-correcting codes that have a polynomial-time encoder and a polynomial-time list-decoder, correcting a fraction of adversarial errors up to \tau_M = 1 - \sqrt[M+1]{M^M R^M}, where R is the rate of the code and M \ge 1 is an arbitrary integer parameter. This makes it possible to decode beyond the Guruswami-Sudan radius of 1 - \sqrt{R} for all rates less than 1/16. Stated another way, for any \varepsilon > 0, we can list-decode in polynomial time a fraction of errors up to 1 - \varepsilon with a code of length n and rate \Omega(\varepsilon / \log(1/\varepsilon)), defined over an alphabet of size n^M = n^{O(\log(1/\varepsilon))}. Notably, this error-correction is achieved in the worst case against adversarial errors: a probabilistic model for the error distribution is neither needed nor assumed. The best results so far for polynomial-time list-decoding of adversarial errors required a rate of O(\varepsilon^2) to achieve the correction radius of 1 - \varepsilon.

Our codes and list-decoders are based on two key ideas. The first is the transition from bivariate polynomial interpolation, pioneered by Sudan and Guruswami-Sudan [12, 22], to multivariate interpolation decoding. The second idea is to part ways with Reed-Solomon codes, for which numerous prior attempts [2, 3, 12, 18] at breaking the O(\varepsilon^2) rate barrier in the worst case were unsuccessful. Rather than devising a better list-decoder for Reed-Solomon codes, we devise better codes. Standard Reed-Solomon encoders view a message as a polynomial f(X) over a field F_q and produce the corresponding codeword by evaluating f(X) at n distinct elements of F_q. Herein, given f(X), we first compute one or more related polynomials g_1(X), g_2(X), ..., g_{M-1}(X) and produce the corresponding codeword by evaluating all these polynomials. Correlation between f(X) and the g_i(X), carefully designed into our encoder, then provides the additional information we need to recover the encoded message from the output of the multivariate interpolation process.
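
To make the improvement over the Guruswami-Sudan radius concrete, the short sketch below (not part of the paper) evaluates \tau_M = 1 - \sqrt[M+1]{M^M R^M} against 1 - \sqrt{R} at an example rate below the 1/16 threshold; the rate value and the range of M shown are illustrative choices.

```python
# Compare the multivariate radius tau_M = 1 - (M^M R^M)^(1/(M+1)) with the
# Guruswami-Sudan radius 1 - sqrt(R) at an example rate below 1/16.
R = 1.0 / 20                      # illustrative rate, below the 1/16 threshold
gs_radius = 1 - R ** 0.5
for M in range(1, 5):
    tau_M = 1 - (M ** M * R ** M) ** (1 / (M + 1))
    print(f"M={M}: tau_M = {tau_M:.4f}   (Guruswami-Sudan: {gs_radius:.4f})")
# M = 1 recovers the Guruswami-Sudan radius; M = 2 exceeds it whenever R < 1/16.
# The best choice of M depends on R, so larger M is not automatically better.
```

The encoding step described in the last paragraph can also be sketched in code. The fragment below is a minimal illustration for M = 2 over a prime field: it computes one correlated polynomial as g(X) = f(X)^d mod E(X) with E(X) irreducible (a power-map correlation of the kind used in the full construction) and outputs the pairs (f(a), g(a)) at n distinct points. The field size, degrees, and helper names are illustrative assumptions, not parameters taken from the paper.

```python
# Sketch of a two-polynomial (M = 2) encoder over a prime field F_p.
# Given the message polynomial f(X), also compute a correlated polynomial
# g(X) = f(X)^d mod E(X) and output the pair (f(a), g(a)) at each of n
# distinct evaluation points a.  All parameter choices are illustrative.

p = 97            # field size (prime, so arithmetic is just integers mod p)
k = 2             # deg E; message polynomials have degree < k
d = 2             # power used to correlate g with f

# E(X) = X^2 - 5, irreducible over F_97 since 5 is a quadratic non-residue mod 97.
E = [92, 0, 1]    # coefficient list, lowest degree first

def poly_mod(a, m):
    """Remainder of polynomial a modulo polynomial m, coefficients in F_p."""
    a = a[:]
    while len(a) >= len(m) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        shift = len(a) - len(m)
        factor = a[-1] * pow(m[-1], -1, p) % p
        for i, c in enumerate(m):
            a[i + shift] = (a[i + shift] - factor * c) % p
        a.pop()
    return a + [0] * (len(m) - 1 - len(a))

def poly_mul(a, b):
    """Product of two polynomials over F_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def poly_powmod(f, e, m):
    """f(X)^e mod m(X) by square-and-multiply."""
    result = [1]
    base = poly_mod(f, m)
    while e:
        if e & 1:
            result = poly_mod(poly_mul(result, base), m)
        base = poly_mod(poly_mul(base, base), m)
        e >>= 1
    return result

def poly_eval(f, x):
    """Evaluate f at x in F_p by Horner's rule."""
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

def encode(f, points):
    """Map a message polynomial f (degree < k) to a codeword of symbol pairs."""
    g = poly_powmod(f, d, E)
    return [(poly_eval(f, a), poly_eval(g, a)) for a in points]

message = [3, 1]                      # f(X) = 3 + X
codeword = encode(message, range(p))  # evaluate at all n = 97 field elements
print(codeword[:5])
```

Because each codeword symbol carries M = 2 field elements, the alphabet is F_q^2 and the rate drops to k/(2n); the payoff, as the abstract states, is the additional structure that the multivariate interpolation decoder exploits to recover f(X).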