What are the fundamental limits of communications channels and channel coding systems? In general, these limits manifest themselves as thresholds which separate what is possible from what is not. For example, the capacity of a communications channel is a coding rate threshold above which reliable communication is not possible. At any coding rate below capacity, however, reliable communication is possible. Likewise, all fixed-rate coding schemes have channel noise thresholds above which the probability of decoding error cannot be made arbitrarily small. When the channel noise is below the threshold, many of the same coding systems can operate with very small error probability.

In this dissertation, we consider the noise thresholds of Convolutional Accumulate-m (CA-m) codes, the capacity of finite-state channels (FSCs), and the information rates achievable via joint iterative decoding of irregular low-density parity-check (LDPC) codes over channels with memory.

CA-m codes are a class of turbo-like codes formed by serially concatenating a terminated convolutional code with a cascade of m interleaved rate-1 "accumulate" codes. The first two chapters consider these codes from two different perspectives. First, the sequence of m encoders is analyzed as a Markov chain to show that, as m goes to infinity, these codes converge to random codes, which are nearly optimal. Next, a detailed threshold analysis is performed for both maximum-likelihood and iterative decoding of long CA-m codes with finite m.

An FSC is a discrete-time channel whose output depends on both the channel input and the channel state. A simple Monte Carlo method is introduced which estimates the achievable information rate of any FSC driven by finite-memory Markov inputs. Until recently, there has been no practical method of estimating the capacity of an FSC. This Monte Carlo method enables one to compute a non-decreasing sequence of lower bounds on the capacity.
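The CA-m construction described above can be sketched in a few lines. The following is a minimal illustration, not the dissertation's exact encoder: the outer code here is an arbitrary terminated rate-1/2 convolutional code with generators (5, 7) in octal, and the interleavers are uniformly random permutations.

```python
import numpy as np

def conv_encode(u, generators=((1, 0, 1), (1, 1, 1))):
    """Terminated rate-1/2 convolutional encoder (illustrative outer code,
    generators (5, 7) in octal)."""
    u = np.concatenate([u, np.zeros(2, dtype=int)])        # termination bits
    streams = [np.convolve(u, g)[: len(u)] % 2 for g in generators]
    return np.stack(streams, axis=1).ravel()               # multiplex output streams

def ca_m_encode(u, m, rng):
    """Illustrative CA-m encoder: a terminated convolutional code followed by
    m stages of (random interleaver + rate-1 accumulator)."""
    c = conv_encode(np.asarray(u, dtype=int))
    for _ in range(m):
        c = c[rng.permutation(len(c))]                     # interleave
        c = np.bitwise_xor.accumulate(c)                   # accumulate: y_t = x_t XOR y_{t-1}
    return c
```

Because every stage is linear over GF(2), encoding the XOR of two messages through the same sequence of interleavers yields the XOR of their codewords, which gives a quick sanity check of the implementation.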
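The flavor of such a simulation-based estimate can be shown on a Gilbert-Elliott channel, a two-state FSC, with i.i.d. uniform inputs; the channel parameters below are illustrative, not taken from the dissertation. For this channel the output is Y = X XOR Z with hidden-Markov noise Z, so with uniform inputs the achievable rate is 1 - H(Z), and H(Z) is estimated from one long sample path of the noise via a forward recursion over the hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gilbert-Elliott channel (illustrative parameters): state 0 = good, 1 = bad
P = np.array([[0.99, 0.01],        # state transition probabilities
              [0.10, 0.90]])
eps = np.array([0.01, 0.30])       # crossover probability in each state

# Simulate one long sample path of the hidden-Markov noise process Z
n = 200_000
s = 0
z = np.empty(n, dtype=int)
for t in range(n):
    z[t] = int(rng.random() < eps[s])
    s = rng.choice(2, p=P[s])

# Forward recursion: accumulate (1/n) * log2 p(z_1, ..., z_n)
alpha = np.array([0.10, 0.01]) / 0.11      # stationary state distribution of P
log_p = 0.0
for t in range(n):
    alpha = alpha * np.where(z[t], eps, 1.0 - eps)   # weight by p(z_t | state)
    c = alpha.sum()
    log_p += np.log2(c)
    alpha = (alpha / c) @ P                          # predict the next state

H_z = -log_p / n       # entropy-rate estimate of the noise process
rate = 1.0 - H_z       # estimated information rate with i.i.d. uniform inputs
```

By the Shannon-McMillan-Breiman theorem, the per-symbol log-likelihood converges to the entropy rate as n grows, so a single long simulation suffices; optimizing the Markov input distribution then yields the non-decreasing sequence of capacity lower bounds mentioned above.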
The joint iterative decoding of irregular LDPC codes over channels with memory is also considered. For a class of erasure channels with memory, we derive a closed-form recursion that can be used to verify necessary and sufficient conditions for successful decoding.
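To see the flavor of such a recursion, consider the standard density-evolution recursion for the memoryless binary erasure channel, x_{l+1} = eps * lambda(1 - rho(1 - x_l)), where lambda and rho are the edge-perspective degree polynomials; the dissertation's recursion generalizes this type of analysis to erasure channels with memory. A minimal sketch for the (3,6)-regular ensemble:

```python
def poly(coeffs, z):
    """Evaluate sum_i coeffs[i] * z**i."""
    return sum(c * z**i for i, c in enumerate(coeffs))

def de_final_erasure(eps, lam, rho, iters=2000):
    """Iterate the BEC density-evolution recursion
    x <- eps * lambda(1 - rho(1 - x)) and return the final erasure fraction."""
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
    return x

# (3,6)-regular LDPC ensemble: lambda(z) = z**2, rho(z) = z**5
lam = [0.0, 0.0, 1.0]
rho = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]

below = de_final_erasure(0.40, lam, rho)   # below the ~0.4294 threshold
above = de_final_erasure(0.45, lam, rho)   # above the threshold
```

Below the ensemble's threshold (about 0.4294 for the (3,6)-regular code) the recursion converges to zero and decoding succeeds; above it, the erasure fraction is trapped at a nonzero fixed point.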