Leveraging coherent space-time codes for noncoherent communication via training

  • Authors:
  • P. Dayal; M. Brehler; M. K. Varanasi

  • Affiliations:
  • Dept. of Electr. & Comput. Eng., Univ. of Colorado, Boulder, CO, USA

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2006

Abstract

Training codes are introduced for the multiple-antenna, noncoherent, multiple-block Rayleigh-fading channel, in which the fading coefficients are constant over a fixed number of dimensions (the coherence interval) within each block, change independently to a new realization in the next block, and are known neither at the transmitter nor at the receiver. Each codeword of a training code consists of a part known to the receiver, used to form a minimum mean-squared error (MMSE) estimate of the channel, and a part containing codeword(s) of a space-time block or trellis code designed for the coherent channel (in which the receiver has perfect knowledge of the channel). The channel estimate is used as if it were error-free for decoding the information-bearing part of the training codeword. Training codes are hence easily designed to have high rate and low decoding complexity by choosing the underlying coherent code to have high rate and to be efficiently decodable. Conditions under which the estimator-detector (E-D) receiver is equivalent to the optimal noncoherent receiver are established. A key performance analysis result of this paper is that training codes, when decoded with the E-D receiver, achieve a diversity order of the error probability equal to the diversity order of the underlying coherent code. In some cases, the performance of training codes can be measured relative to coherent reception via the "training efficiency," which is then optimized over the energy allocation between the training and data phases. In the limit of increasing block lengths, training codes always achieve the performance of coherent reception. The examples of training codes provided in this work have polynomial complexity in rate but an error rate comparable to the best-performing unitary designs available, even though the latter require exponential decoding complexity.
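
To make the estimator-detector (E-D) structure concrete, the following is a minimal sketch, not code from the paper: one coherence block with a pilot matrix known to the receiver, a linear MMSE channel estimate, and a toy uncoded-QPSK codebook standing in for the coherent space-time block or trellis code. The antenna counts, block lengths, SNR, equal energy split between phases, and the codebook itself are all assumptions made purely for illustration.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    Mt, Mr = 2, 2        # transmit / receive antennas (assumed for this toy example)
    T_tau, T_d = 2, 2    # training and data lengths within one coherence interval
    rho = 10.0           # SNR; equal energy in both phases is assumed here

    def cnoise(shape):
        # i.i.d. CN(0, 1) samples used for both fading and noise
        return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

    H = cnoise((Mt, Mr))  # block-fading channel, constant over the coherence interval

    # Training phase: pilot matrix known to the receiver, linear MMSE channel estimate.
    X_tau = np.sqrt(rho) * np.eye(T_tau, Mt)
    Y_tau = X_tau @ H + cnoise((T_tau, Mr))
    A = X_tau.conj().T @ X_tau + np.eye(Mt)
    H_hat = np.linalg.solve(A, X_tau.conj().T @ Y_tau)   # MMSE estimate under CN(0,1) prior

    # Data phase: a toy "coherent code" (uncoded QPSK on each antenna and time slot),
    # standing in for the coherent space-time code of the abstract.
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    codebook = [np.sqrt(rho) * np.array(c).reshape(T_d, Mt)
                for c in itertools.product(qpsk, repeat=T_d * Mt)]
    sent = 7
    Y_d = codebook[sent] @ H + cnoise((T_d, Mr))

    # E-D detection: treat H_hat as if it were the true channel and decode coherently.
    metrics = [np.linalg.norm(Y_d - X @ H_hat) for X in codebook]
    print("sent", sent, "decoded", int(np.argmin(metrics)))

In this sketch the energy given to X_tau versus the data codewords is fixed; varying that split is the energy-allocation optimization of the "training efficiency" that the abstract refers to.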