Maximum A Posteriori (MAP) decoding is a crucial enabler of turbo coding and other powerful feedback-based algorithms. To allow pervasive use of these techniques in resource-constrained systems, it is important to limit their implementation complexity without sacrificing the superior performance they are known for. We show that introducing traceback information into the MAP algorithm, thereby leveraging components that are also part of the Soft-Output Viterbi Algorithm (SOVA), offers two unique opportunities to simplify the computational requirements. Our proposed enhancements are effective at each individual decoding iteration and therefore provide gains on top of existing techniques such as early termination and memory optimizations. Based on these enhancements, we present three new architectural variants of the decoder; which variant is preferable depends on the decoder's memory hardware requirements and the number of trellis states. Computational complexity is reduced significantly with only a negligible performance penalty.
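For readers unfamiliar with the baseline algorithm the abstract builds on, the sketch below shows a plain max-log-MAP (BCJR) bit-LLR computation on a toy 2-state accumulator trellis. Everything here is illustrative: the trellis, the correlation-form branch metric, and the function names are assumptions for exposition, not the paper's proposed traceback-enhanced architecture.

```python
# Toy trellis for a rate-1 accumulator (1/(1+D)):
# from state s with input bit u, the output is v = u ^ s and the
# next state equals v. Each transition is (from, input, output, to).
TRANSITIONS = [(s, u, u ^ s, u ^ s) for s in (0, 1) for u in (0, 1)]
N_STATES = 2
NEG_INF = float("-inf")

def max_log_map(ch_llrs, apriori):
    """Max-log-MAP a-posteriori LLRs (positive LLR favors bit 0).

    ch_llrs[k] : channel LLR for the k-th coded bit
    apriori[k] : a-priori LLR for the k-th information bit
    """
    n = len(ch_llrs)

    def branch_metric(k, u, v):
        # Correlation form of the branch metric under the max-log
        # approximation (constant terms cancel in the LLR).
        return 0.5 * ((1 - 2 * u) * apriori[k] + (1 - 2 * v) * ch_llrs[k])

    # Forward recursion (alpha), trellis starts in state 0.
    alpha = [[NEG_INF] * N_STATES for _ in range(n + 1)]
    alpha[0][0] = 0.0
    for k in range(n):
        for s, u, v, t in TRANSITIONS:
            alpha[k + 1][t] = max(alpha[k + 1][t],
                                  alpha[k][s] + branch_metric(k, u, v))

    # Backward recursion (beta); unterminated trellis, so all
    # final states are equally likely.
    beta = [[NEG_INF] * N_STATES for _ in range(n + 1)]
    beta[n] = [0.0] * N_STATES
    for k in range(n - 1, -1, -1):
        for s, u, v, t in TRANSITIONS:
            beta[k][s] = max(beta[k][s],
                             beta[k + 1][t] + branch_metric(k, u, v))

    # A-posteriori LLR: best u=0 path metric minus best u=1 path metric.
    llrs = []
    for k in range(n):
        best = [NEG_INF, NEG_INF]
        for s, u, v, t in TRANSITIONS:
            m = alpha[k][s] + branch_metric(k, u, v) + beta[k + 1][t]
            best[u] = max(best[u], m)
        llrs.append(best[0] - best[1])
    return llrs
```

In an iterative (turbo) setting, the extrinsic part of these LLRs would be exchanged between constituent decoders as the a-priori input for the next half-iteration; the paper's enhancements apply within each such iteration.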