Decoding error-correcting codes via linear programming

  • Authors:
  • Jon Feldman; David R. Karger

  • Venue:
  • Ph.D. thesis, Massachusetts Institute of Technology
  • Year:
  • 2003

Abstract

Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new “LP decoders” have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the channel. We refer to this property as the ML certificate property. We provide specific LP decoders for two major families of codes: turbo codes and low-density parity-check (LDPC) codes. These codes have received a great deal of attention recently due to their unprecedented error-correcting performance. Our decoder is particularly attractive for analysis of these codes because the standard message-passing algorithms used for decoding are often difficult to analyze. For turbo codes, we give a relaxation very close to min-cost flow, and show that the success of the decoder depends on the costs in a certain residual graph. For the case of rate-1/2 repeat-accumulate codes (a certain type of turbo code), we give an inverse polynomial upper bound on the probability of decoding failure. For LDPC codes (or any binary linear code), we give a relaxation based on the factor graph representation of the code. We introduce the concept of fractional distance, which is a function of the relaxation, and show that LP decoding always corrects a number of errors up to half the fractional distance. (Abstract shortened by UMI.)
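To make the relaxation concrete, here is a minimal sketch of LP decoding for a binary linear code, in the spirit of the factor-graph-based relaxation the abstract describes. The parity-check matrix H (a (7,4) Hamming code), the cost vector gamma, the helper lp_decode, and the use of scipy are illustrative choices made here, not taken from the thesis. The constraints encode, for each check node and each odd-sized subset S of its neighboring bits, the inequality sum_{i in S} f_i - sum_{i in N(j)\S} f_i <= |S| - 1, with 0 <= f_i <= 1, which is the usual explicit description of the relaxed codeword polytope.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code (an illustrative choice).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])
n = H.shape[1]

def lp_decode(gamma):
    """Sketch of LP decoding: minimize gamma . f over the relaxed polytope.

    gamma[i] is a per-bit cost (e.g. a log-likelihood ratio favoring bit 0).
    Returns the LP optimum f and whether it is integral (the ML certificate).
    """
    A_ub, b_ub = [], []
    for row in H:
        nbrs = list(np.flatnonzero(row))          # bits joined by this check
        for k in range(1, len(nbrs) + 1, 2):      # odd-sized subsets S
            for S in itertools.combinations(nbrs, k):
                a = np.zeros(n)
                a[list(S)] = 1.0                  # +f_i for i in S
                rest = [i for i in nbrs if i not in S]
                a[rest] = -1.0                    # -f_i for the other neighbors
                A_ub.append(a)
                b_ub.append(len(S) - 1)           # constraint: sum <= |S| - 1
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    f = res.x
    return f, bool(np.all(np.isclose(f, np.round(f))))

# Example: the all-zeros codeword sent over a BSC, with one bit flipped.
# gamma[i] > 0 says "bit i looks like 0"; gamma[i] < 0 says "looks like 1".
gamma = np.ones(n)
gamma[2] = -1.0            # the channel flipped bit 2
f, integral = lp_decode(gamma)
print(np.round(f).astype(int), "integral (ML certificate):", integral)
```

With a single flipped bit as above, the LP optimum should be the transmitted all-zeros codeword and integral, which illustrates the ML certificate property: whenever the LP solution is integral, it is guaranteed to be the maximum-likelihood codeword.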