A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity I(W) of any given binary-input discrete memoryless channel (B-DMC) W. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of N independent copies of a given B-DMC W, a second set of N binary-input channels {W_N^(i) : 1 ≤ i ≤ N} such that, as N becomes large, the fraction of indices i for which I(W_N^(i)) is near 1 approaches I(W) and the fraction for which I(W_N^(i)) is near 0 approaches 1 − I(W). The polarized channels {W_N^(i)} are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining ones. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC W with I(W) > 0 and any target rate R < I(W), there exists a sequence of polar codes {C_n : n ≥ 1} such that C_n has blocklength N = 2^n, rate ≥ R, and probability of block error under successive cancellation decoding bounded as P_e(N, R) ≤ O(N^(-1/4)) independently of the code rate. This performance is achievable by encoders and decoders with complexity O(N log N) each.
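The polarization effect is easiest to see on the binary erasure channel, where the synthesized channels stay erasure channels with explicit parameters: one polarization step turns BEC(z) into a degraded channel BEC(2z − z²) and an upgraded channel BEC(z²). The sketch below (illustrative only; variable names and thresholds are my own, not from the paper) iterates this recursion and counts how many of the N = 2^n synthesized channels have become near-perfect or near-useless:

```python
def polarize(eps, n):
    """Erasure probabilities of the N = 2^n channels synthesized
    from BEC(eps) by n polarization steps.

    One step maps erasure probability z to the pair
    (2z - z^2, z^2): the 'bad' and 'good' synthesized channels.
    """
    zs = [eps]
    for _ in range(n):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

eps = 0.5           # BEC erasure probability; I(W) = 1 - eps = 0.5
n = 10              # N = 2^10 = 1024 synthesized channels
zs = polarize(eps, n)

# Fraction of channels with capacity near 1 (z near 0) and near 0 (z near 1);
# the 0.01 cutoff is an arbitrary illustration threshold.
good = sum(z < 0.01 for z in zs) / len(zs)
bad = sum(z > 0.99 for z in zs) / len(zs)
print(f"near-perfect: {good:.3f}, near-useless: {bad:.3f}")
```

Note that the recursion conserves the average erasure probability exactly (the mean of 2z − z² and z² is z), so the average capacity stays at I(W) = 1 − eps while the individual channels drift toward the extremes; as n grows, the "good" fraction approaches I(W), which is the basis for selecting the information set of a polar code.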