Good error-correcting codes based on very sparse matrices

  • Authors:
  • D. J. C. MacKay

  • Affiliations:
  • Cavendish Lab., Cambridge Univ.

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1999

Abstract

We study two families of error-correcting codes defined in terms of very sparse matrices. “MN” (MacKay-Neal (1995)) codes were recently invented, and “Gallager codes” were first investigated in 1962 but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are “very good”, in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
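The sum-product decoder mentioned in the abstract works by iteratively passing probabilistic messages between the variable nodes and the check nodes defined by the sparse parity-check matrix. The sketch below is a minimal, illustrative Python implementation of log-domain sum-product decoding over a binary-symmetric channel with crossover probability f (whose Shannon limit is C = 1 - H2(f)); the tiny (7,4) parity-check matrix H, the function name sum_product_decode, and all parameter choices are assumptions made for illustration, not the constructions or software described in the paper.

```python
# Minimal sketch: log-domain sum-product decoding of a sparse-parity-check
# (Gallager/LDPC) code over a binary-symmetric channel. The matrix H and the
# helper names are hypothetical toy choices, not the paper's constructions.
import numpy as np

def sum_product_decode(H, llr, max_iters=50):
    """Iteratively exchange messages between check and variable nodes.

    H   : (m, n) binary parity-check matrix (0/1 entries)
    llr : length-n channel log-likelihood ratios, log P(y|x=0)/P(y|x=1)
    """
    m, n = H.shape
    M = H * llr                                  # variable-to-check messages
    x_hat = (llr < 0).astype(int)
    for _ in range(max_iters):
        E = np.zeros((m, n))                     # check-to-variable messages
        for i in range(m):
            idx = np.flatnonzero(H[i])           # variables in check i
            t = np.tanh(M[i, idx] / 2.0)
            for k, j in enumerate(idx):
                prod = np.prod(np.delete(t, k))  # exclude the recipient j
                prod = np.clip(prod, -0.999999, 0.999999)
                E[i, j] = 2.0 * np.arctanh(prod)
        total = llr + E.sum(axis=0)              # posterior LLR per bit
        x_hat = (total < 0).astype(int)
        if not np.any((H @ x_hat) % 2):          # all parity checks satisfied
            break
        M = H * (total - E)                      # pass on extrinsic information only
    return x_hat

# Toy run: (7,4) Hamming-style H, all-zeros codeword sent, one bit flipped
# by a binary-symmetric channel with crossover probability f = 0.1.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
f = 0.1
y = np.zeros(7, dtype=int)
y[2] = 1                                         # single channel error
llr = (1 - 2 * y) * np.log((1 - f) / f)          # BSC log-likelihood ratios
print(sum_product_decode(H, llr))                # -> [0 0 0 0 0 0 0]
```

The tanh-rule check update and the log-likelihood-ratio initialisation above are the standard textbook form of the sum-product algorithm; for the very sparse matrices the paper studies, each row of H contains only a few nonzero entries, so the per-iteration cost of this update grows only linearly with the block length.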