For high-data-rate applications, iterative turbo-like decoders must be implemented with parallel architectures, which impose collision-free constraints on the reading/writing of data from/into memory. This consideration applies to both main classes of turbo-like codes, i.e., turbo codes and low-density parity-check (LDPC) codes. Contrary to common belief in the literature, we prove in this paper that no ad hoc code design is needed to meet the parallelism requirement: for any code and any scheduling of the reading/writing operations, there exists a mapping of the variables into memory that guarantees collision-free access. The proof is constructive, i.e., it yields an algorithm that produces the desired collision-free mapping. The algorithm is applied to two simple examples, one for turbo codes and one for LDPC codes, to illustrate how it works.