Area-efficient high-speed decoding schemes for turbo decoders
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Highly parallel decoders for convolutional turbo codes are studied through two parallel decoding architectures and a design approach for parallel interleavers. To resolve memory conflicts on the extrinsic information in a parallel decoder, a block-wise approach in which data are written row-by-row and read diagonally is proposed for designing collision-free parallel interleavers. In addition, a warm-up-free parallel sliding-window architecture is proposed for long turbo codes to maximize the decoding speed of parallel decoders; for an eight-parallel decoder, it increases decoding speed by 6%-34% at the cost of a 1% increase in storage. For short turbo codes (e.g., a block length of 512 bits), a warm-up-free parallel window architecture is proposed that doubles the decoding speed at the cost of a 12% increase in hardware.
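The row-write/diagonal-read idea behind the collision-free interleaver can be illustrated with a minimal sketch. The mapping below (one memory bank per row, P parallel sub-decoders, and the read address `(p + t) % P` at step `t`) is an assumed simplification for illustration, not the paper's exact interleaver: because the bank indices accessed at any step are all distinct, no two sub-decoders ever contend for the same bank.

```python
# Sketch of block-wise row-write / diagonal-read access (assumed mapping,
# not the paper's exact interleaver design).
P, W = 4, 6  # P parallel sub-decoders; W extrinsic values per bank

banks = [[None] * W for _ in range(P)]  # one memory bank per row

# Write phase: sub-decoder p writes its extrinsic values row-by-row into bank p.
for p in range(P):
    for col in range(W):
        banks[p][col] = ("ext", p, col)

# Read phase: at step t, sub-decoder p reads bank (p + t) % P ("diagonal-wise").
# For any fixed t, the P bank indices are a permutation of 0..P-1, so the
# accesses are collision-free.
for t in range(W):
    accessed = [(p + t) % P for p in range(P)]
    assert len(set(accessed)) == P  # all P banks distinct at every step
```

The same argument holds for any bank count P: shifting all read indices by the step number keeps them pairwise distinct, which is what makes the parallel reads conflict-free.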