On interleaved, differentially encoded convolutional codes

  • Authors:
M. Peleg; I. Sason; S. Shamai; A. Elia

  • Affiliations:
Dept. of Electr. Eng., Technion-Israel Inst. of Technol., Haifa

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1999

Abstract

We study a serially interleaved concatenated code construction in which the outer code is a standard convolutional code and the inner code is a recursive convolutional code of rate 1. We focus on the ubiquitous inner differential encoder (used, in particular, to resolve phase ambiguities), the double differential encoder (used to resolve both phase and frequency ambiguities), and another rate-1 recursive convolutional code of memory 2. We substantiate analytically the rather surprising result that the error probabilities of this construction, under maximum-likelihood (ML) coherent detection of antipodal modulation over the additive white Gaussian noise (AWGN) channel, are lower than those of the stand-alone outer convolutional code, in spite of the fact that the inner code is of rate 1. The analysis is based on the tangential sphere upper bound on the error probability of an ML decoder, incorporating the ensemble weight distribution (WD) of the concatenated code, where the ensemble is generated by all random and uniform interleavers. This surprising result is attributed to the WD thinning observed for the concatenated scheme, which shapes the WD of the outer convolutional code to resemble more closely the binomial distribution (typical of a fully random code of the same length and rate). This gain is maintained despite a rather dramatic decrease, demonstrated here, in the minimum distance of the concatenated scheme as compared to that of the stand-alone outer convolutional code. The advantage, in terms of bit and/or block error probabilities, of the examined serially interleaved concatenated code decoded by a practical suboptimal decoder over the optimally decoded standard convolutional code is demonstrated by simulations, and some insights into the performance of the iterative decoding algorithm are also discussed. Though we have investigated only specific constructions of constituent inner (rate-1) and outer codes, we trust, based on the rationale of the arguments presented here, that these results extend to many other constituent convolutional outer codes and rate-1 inner recursive convolutional codes.
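
To make the construction concrete, the following is a minimal Python sketch of the encoder chain described in the abstract: an outer convolutional code, a random interleaver, and a rate-1 recursive inner code, here the differential encoder 1/(1+D) (an accumulator over GF(2)); applying it twice would give the double differential encoder 1/(1+D)^2. The specific outer generators (133, 171 in octal), the memory, the block length, the interleaver seed, and all function names are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the serially interleaved concatenated construction:
# outer convolutional code -> random interleaver -> rate-1 recursive inner code.
# Parameters below (generators, memory, block length) are illustrative only.

import random

def conv_encode(bits, gens=(0o133, 0o171), memory=6):
    """Rate-1/2 feedforward convolutional encoder (terminated); generators in octal."""
    state = 0
    out = []
    for b in list(bits) + [0] * memory:            # append tail bits to terminate the trellis
        state = ((state << 1) | b) & ((1 << (memory + 1)) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") & 1)   # parity of tapped register bits
    return out

def differential_encode(bits):
    """Rate-1 recursive inner code 1/(1+D): y_k = x_k XOR y_{k-1} (an accumulator over GF(2))."""
    y, out = 0, []
    for x in bits:
        y ^= x
        out.append(y)
    return out

def serial_concatenation(info_bits, seed=0):
    """Outer convolutional code -> interleaver -> inner differential encoder."""
    outer = conv_encode(info_bits)
    perm = list(range(len(outer)))
    random.Random(seed).shuffle(perm)              # one sample from the uniform-interleaver ensemble
    interleaved = [outer[i] for i in perm]
    return differential_encode(interleaved)

if __name__ == "__main__":
    info = [random.randint(0, 1) for _ in range(64)]
    codeword = serial_concatenation(info)
    print(len(info), "info bits ->", len(codeword), "coded bits (overall rate ~1/2)")
```

Since the inner code has rate 1, the overall rate and block length are those of the outer code alone; the concatenation only reshapes the weight distribution of the interleaver-generated ensemble, which is the effect the tangential sphere bound analysis in the paper exploits.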