Neural Networks for Optimization and Signal Processing
A general rate K/N convolutional decoder based on neural networks with stopping criterion
Advances in Artificial Intelligence
In this paper, a detailed mathematical model of a rate-1/n convolutional decoder based on neural networks (NNs) and the gradient descent algorithm is developed and analysed. A general expression for the noise energy function required for recurrent neural network (RNN) decoding is derived, followed by the gradient descent updating rule, from which the NN decoder is designed. A simulator of the decoder was implemented on the basis of this theory. Simulation results confirm that the RNN decoder performs very close to the Viterbi decoder and works extremely well for certain specially structured convolutional codes. In particular, the decoding capability of RNN decoders is investigated when a simulated annealing (SA) technique is applied. It is also shown that certain codes do not require SA and still achieve performance comparable to the Viterbi algorithm.
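The decoding scheme described in the abstract can be illustrated with a small sketch. Everything below is an assumption for illustration, not the paper's actual model: a hypothetical rate-1/2 code with generators (1, 1+D), bits mapped to the +/-1 domain (where XOR becomes multiplication), a noise energy function defined as the squared distance between the received word and the re-encoded soft bits, a hand-derived gradient for this particular code, and an optional annealed noise term standing in for the SA technique. The specific generator polynomials, learning rate, and cooling schedule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(u):
    """Hypothetical rate-1/2 encoder in the +/-1 domain: each input bit u_k
    produces the pair (u_k, u_k * u_{k-1}); XOR in {0,1} corresponds to
    multiplication in {+1,-1}. The encoder state starts at +1."""
    up = np.concatenate(([1.0], u[:-1]))        # previous bit u_{k-1}
    return np.stack([u, u * up], axis=1).ravel()

def energy(u, r):
    """Noise energy: squared distance between the received word r and the
    word obtained by re-encoding the soft bits u."""
    return np.sum((r - encode(u)) ** 2)

def grad(u, r):
    """Analytic gradient of the energy w.r.t. each soft bit u_k (derived by
    hand for this toy code; the paper derives the general expression)."""
    up = np.concatenate(([1.0], u[:-1]))
    e0 = r[0::2] - u                    # errors on the systematic branch
    e1 = r[1::2] - u * up               # errors on the parity branch
    g = -2.0 * e0 - 2.0 * e1 * up
    g[:-1] += -2.0 * e1[1:] * u[1:]     # u_k also feeds the next parity bit
    return g

def decode(r, steps=200, lr=0.05, T0=0.0):
    """Gradient descent on the energy; T0 > 0 adds a linearly cooled noise
    term as a stand-in for simulated annealing."""
    u = np.clip(r[0::2].copy(), -1.0, 1.0)      # init from systematic bits
    for t in range(steps):
        u -= lr * grad(u, r)
        if T0:
            u += T0 * (1 - t / steps) * rng.standard_normal(len(u))
        u = np.clip(u, -1.0, 1.0)               # stay inside the hypercube
    return np.sign(u)

# usage: encode random bits, pass through an AWGN channel, decode
bits = rng.choice([-1.0, 1.0], size=20)
r = encode(bits) + 0.3 * rng.standard_normal(40)
print(np.array_equal(decode(r), bits))
```

The energy landscape has its global minimum at the transmitted codeword, so at moderate noise levels plain gradient descent from the hard-decision initialisation recovers it; the SA noise term is only needed to escape local minima of less favourably structured codes, which matches the abstract's observation that some codes do not require SA.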