Neural networks (NNs) have been extensively applied to many signal processing problems. In particular, due to their capacity to form complex decision regions, NNs have been successfully used in adaptive equalization of digital communication channels. The mean square error (MSE) criterion usually adopted in neural learning, however, is not directly related to minimizing the classification error, i.e., the bit error rate (BER), which is the quantity of interest in channel equalization. Moreover, common gradient-based learning techniques often suffer from slow convergence and numerical ill conditioning. In this paper, we introduce a novel approach to learning in recurrent neural networks (RNNs) that exploits the principle of discriminative learning, minimizing an error functional that is a direct measure of the classification error. The proposed method extends to RNNs a technique successfully applied to fast learning of feedforward NNs and is based on descent of the error functional in the space of the linear combinations of the neurons (the neuron space); its main features are faster convergence and better numerical conditioning than gradient-based approaches, while numerical stability is ensured by the use of robust least squares solvers. Experiments on the equalization of PAM signals over different transmission channels demonstrate the effectiveness of the proposed approach.
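To make the flavor of the approach concrete, the sketch below shows a minimal recurrent equalizer for 2-PAM symbols in which only the linear readout over the hidden neurons (the "neuron space" combination) is trained, and that readout is obtained with a robust least-squares solver rather than gradient descent. This is an illustrative reconstruction, not the authors' exact algorithm: the channel taps, noise level, reservoir-style recurrent weights, and network size are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-PAM symbols passed through a simple linear ISI channel (hypothetical taps).
n = 2000
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([1.0, 0.5])                       # illustrative impulse response
received = np.convolve(symbols, channel, mode="full")[:n]
received += 0.05 * rng.standard_normal(n)            # additive channel noise

# A small fixed random recurrent network stands in for the RNN's hidden neurons.
H = 20                                               # illustrative hidden size
W_in = 0.5 * rng.standard_normal(H)
W_rec = rng.standard_normal((H, H))
W_rec *= 0.8 / max(abs(np.linalg.eigvals(W_rec)))    # keep the dynamics stable

# Collect hidden states by running the recurrence over the received signal.
states = np.zeros((n, H))
x = np.zeros(H)
for t in range(n):
    x = np.tanh(W_in * received[t] + W_rec @ x)
    states[t] = x

# Train only the linear combination of neurons with a robust LS solver,
# mirroring the idea of descending the error functional in the neuron space.
w, *_ = np.linalg.lstsq(states, symbols, rcond=None)

# Symbol decisions and resulting bit error rate.
decisions = np.sign(states @ w)
ber = np.mean(decisions != symbols)
print(f"BER: {ber:.4f}")
```

The key design point echoed here is that solving for the output combination by least squares sidesteps the slow convergence and ill conditioning of gradient descent on the same weights; the paper's method goes further by driving a discriminative (classification-error) functional rather than plain MSE.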