Highly-parallel decoding architectures for convolutional turbo codes

  • Authors:
  • Zhiyong He, Paul Fortier, and Sébastien Roy

  • Affiliations:
  • Department of Electrical and Computer Engineering, Laval University, Quebec City, Canada (all authors)

  • Venue:
  • IEEE Transactions on Very Large Scale Integration (VLSI) Systems
  • Year:
  • 2006


Abstract

Highly parallel decoders for convolutional turbo codes are studied through two parallel decoding architectures and a design approach for parallel interleavers. To solve the memory conflict problem of extrinsic information in a parallel decoder, a block-like approach in which data is written row-by-row and read diagonal-wise is proposed for designing collision-free parallel interleavers. Furthermore, a warm-up-free parallel sliding-window architecture is proposed for long turbo codes to maximize the decoding speed of parallel decoders. The proposed architecture increases decoding speed by 6%-34% at the cost of a 1% storage increase for an eight-parallel decoder. For short turbo codes (e.g., a length of 512 bits), a warm-up-free parallel window architecture is proposed that doubles the decoding speed at the cost of a 12% hardware increase.
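
The following is a minimal sketch, not the authors' exact interleaver construction, of why a "write row-by-row, read diagonal-wise" block access pattern is collision-free. The parameters P (number of parallel sub-decoders, one memory bank each) and W (symbols per row), along with the function names, are hypothetical choices for illustration.

```python
# Sketch of a collision-free block access pattern (assumed parameters,
# not the paper's exact design): P sub-decoders, each owning one bank,
# and a block of P*W extrinsic values arranged as a P x W array.

P, W = 4, 6  # hypothetical: 4 sub-decoders, 6 symbols per row

def write_banks(data):
    """Write extrinsic values row-by-row: sub-decoder p fills bank p."""
    assert len(data) == P * W
    return [data[p * W:(p + 1) * W] for p in range(P)]

def diagonal_read_schedule():
    """At step t, sub-decoder p reads bank (p + t) % P, column t.
    The P bank indices {(p + t) % P : p = 0..P-1} are all distinct,
    so no two sub-decoders access the same bank in the same cycle."""
    return [[((p + t) % P, t) for p in range(P)] for t in range(W)]

if __name__ == "__main__":
    banks = write_banks(list(range(P * W)))
    for t, accesses in enumerate(diagonal_read_schedule()):
        touched = [b for b, _ in accesses]
        assert len(set(touched)) == P  # collision-free: each bank hit once
        print(f"step {t}:", [banks[b][c] for b, c in accesses])
```

The key property is that shifting the bank index by the time step spreads the P simultaneous reads over P distinct banks, which is the essence of avoiding extrinsic-memory conflicts in a parallel decoder.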