Design and implementation of low-energy turbo decoders
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Parallel interleaver design and VLSI architecture for low-latency MAP turbo decoders
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
VLSI architectural design tradeoffs for sliding-window Log-MAP decoders
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Memory sub-banking scheme for high throughput MAP-based SISO decoders
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Interleaved Trellis Coded Modulation and Decoder Optimizations for 10 Gigabit Ethernet over Copper
Journal of VLSI Signal Processing Systems
Traceback-Based Optimizations for Maximum a Posteriori Decoding Algorithms
Journal of Signal Processing Systems
Unified convolutional/turbo decoder design using tile-based timing analysis of VA/MAP kernel
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Low-power memory-reduced traceback MAP decoding for double-binary convolutional turbo decoder
IEEE Transactions on Circuits and Systems I: Regular Papers (Special Issue on ISCAS 2008)
Area-efficient high-throughput MAP decoder architectures
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Highly-parallel decoding architectures for convolutional turbo codes
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
A Flexible LDPC/Turbo Decoder Architecture
Journal of Signal Processing Systems
Turbo decoders inherently suffer from large decoding latency and low throughput because of iterative decoding. To increase throughput and reduce latency, high-speed decoding schemes must be employed. In this paper, following a discussion of basic parallel decoding architectures, the segmented sliding-window approach and two other area-efficient parallel decoding schemes are proposed. A detailed comparison of storage requirements, number of computation units, and overall decoding latency is provided for the various decoding schemes at different levels of parallelism. Hybrid parallel decoding schemes are proposed as an attractive solution for implementations with very high levels of parallelism. To relieve the storage bottleneck in each subdecoder, a modified version of the partial storage of state metrics approach is presented; the new approach achieves a better tradeoff between storage and recomputation in general. The application of the pipeline-interleaving technique to parallel turbo decoding architectures is also presented. Simulation results demonstrate that the proposed area-efficient parallel decoding schemes cause no performance degradation.
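The segmented sliding-window idea in the abstract amounts to splitting one code block across several subdecoders, each sweeping its segment in fixed-length windows while the backward recursion warms up over a short acquisition region borrowed from the neighboring stages. A minimal Python sketch of such a window schedule follows; the function name, parameters, and the specific window/acquisition lengths are illustrative assumptions, not values taken from the paper:

```python
def segment_schedule(block_len, num_workers, window, acq):
    """Sketch of a segmented sliding-window schedule (illustrative only).

    Partitions a trellis of block_len stages into num_workers parallel
    segments. Each segment is processed in sliding windows of length
    `window`; for each window, the backward recursion warms up over `acq`
    extra stages past the window end (clamped at the block boundary).
    Returns, per subdecoder, a list of (win_start, win_end, acq_end) tuples.
    """
    seg_len = -(-block_len // num_workers)  # ceiling division
    schedule = []
    for p in range(num_workers):
        start = p * seg_len
        end = min(start + seg_len, block_len)
        windows = []
        for w_start in range(start, end, window):
            w_end = min(w_start + window, end)
            # Acquisition region for the backward recursion: extends past
            # the window into the next segment's stages, clamped at the
            # end of the block where reliable boundary metrics exist.
            acq_end = min(w_end + acq, block_len)
            windows.append((w_start, w_end, acq_end))
        schedule.append(windows)
    return schedule
```

For example, a 64-stage block split across 4 subdecoders with 8-stage windows and a 4-stage acquisition region gives segment 0 the windows (0, 8, 12) and (8, 16, 20), while the last window of the last segment is clamped at the block edge: (56, 64, 64). The clamping reflects the paper's observation that subdecoder boundaries, not the block termination, are where warm-up recomputation is needed.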