This paper investigates efficient maximum-likelihood (ML) decoding algorithms for low-density parity-check (LDPC) codes over erasure channels. In particular, enhancements to a previously proposed structured Gaussian elimination approach are presented. The improvements are achieved through a set of algorithms, referred to here as pivoting algorithms, which aim to limit the average number of reference variables (or pivots) from which the erased symbols can be recovered. Four pivoting algorithms are compared, exhibiting different trade-offs between the complexity of the pivoting phase and the average number of pivots. Numerical results on the performance of LDPC codes under ML erasure decoding complete the analysis, confirming that near-optimum performance can be obtained at an affordable decoding complexity, up to very high data rates. For one of the presented algorithms, for example, a software implementation has been developed that is capable of delivering data rates above 1.5 Gbps on a commercial computing platform.
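The abstract does not spell out the decoding procedure itself, but the baseline it builds on can be illustrated with a minimal sketch: ML erasure decoding amounts to solving, over GF(2), the linear system formed by the parity-check columns at the erased positions. The function name `ml_erasure_decode`, the dense-matrix representation, and the plain Gauss-Jordan elimination below are illustrative assumptions, not the paper's method; the paper's structured, sparse pivoting algorithms exist precisely to avoid this dense elimination on all but a small set of pivot variables.

```python
import numpy as np

def ml_erasure_decode(H, y, erased):
    """Minimal sketch of ML erasure decoding: solve H_E * x = s over GF(2),
    where H_E collects the columns of H at the erased positions and s is the
    syndrome contribution of the correctly received symbols.

    H:      (m x n) binary parity-check matrix (np.uint8)
    y:      length-n received word; values at `erased` positions are ignored
    erased: list of erased coordinate indices

    Returns the recovered codeword, or None if the erased columns are
    rank-deficient (ML decoding failure).
    """
    erased = list(erased)
    known = [j for j in range(H.shape[1]) if j not in set(erased)]
    # Move the contribution of the correctly received symbols to the
    # right-hand side of the system.
    s = ((H[:, known].astype(np.int64) @ y[known].astype(np.int64)) % 2).astype(np.uint8)
    A = H[:, erased].copy()
    m, e = A.shape
    # Plain Gauss-Jordan elimination over GF(2). The pivoting algorithms in
    # the paper instead choose the elimination order on the sparse matrix so
    # that only a few reference variables (pivots) are treated densely.
    for col in range(e):
        piv = next((r for r in range(col, m) if A[r, col]), None)
        if piv is None:
            return None  # no pivot: erased symbols not uniquely determined
        A[[col, piv]] = A[[piv, col]]
        s[[col, piv]] = s[[piv, col]]
        for r in range(m):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                s[r] ^= s[col]
    out = y.copy()
    out[erased] = s[:e]  # after elimination, A[:e, :e] is the identity
    return out

# Toy usage with a (7,4) Hamming code: erase two bits of a valid codeword.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
c = np.array([1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)  # satisfies H @ c = 0 (mod 2)
assert np.array_equal(ml_erasure_decode(H, c, erased=[0, 1]), c)
```

Dense elimination of this kind costs on the order of m·e² bit operations per codeword, which is exactly what limits throughput at high rates; confining the dense step to a small pivot set and recovering the remaining erasures by cheap back-substitution is what the abstract's pivoting algorithms trade complexity against.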