System identification: theory for the user
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Nonlinear black-box modeling in system identification: a unified overview
Automatica (Journal of IFAC) - Special issue on trends in system identification
Adaptive filter theory (3rd ed.)
Natural gradient works efficiently in learning
Neural Computation
Applications of neural networks to digital communications: a survey
Signal Processing - Special issue on emerging techniques for communication terminals
Neural Networks: A Comprehensive Foundation
Principles of Digital Transmission: With Wireless Applications
Semiparametric model and superefficiency in blind deconvolution
Signal Processing
Blind identification of LTI-ZMNL-LTI nonlinear channel models
IEEE Transactions on Signal Processing
Analysis of stochastic gradient tracking of time-varying polynomial Wiener systems
IEEE Transactions on Signal Processing
Identification of a class of nonlinear systems under stationary non-Gaussian excitation
IEEE Transactions on Signal Processing
Statistical analysis of neural network modeling and identification of nonlinear systems with memory
IEEE Transactions on Signal Processing
Quality modeling of chemical product based on a new chaotic Elman neural network
ICNC'09 Proceedings of the 5th international conference on Natural computation
We use natural gradient (NG) learning neural networks (NNs) to model and identify nonlinear systems with memory. The nonlinear system consists of a discrete-time linear filter H followed by a zero-memory nonlinearity g(·). The NN model is composed of a linear adaptive filter Q followed by a two-layer memoryless nonlinear NN. A Kalman filter-based technique and a search-and-converge method are employed for the NG algorithm. It is shown that NG descent learning significantly outperforms both ordinary gradient descent and the Levenberg-Marquardt (LM) procedure in terms of convergence speed and mean squared error (MSE) performance.
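The cascade structure described in the abstract (a linear filter H followed by a zero-memory nonlinearity g(·), identified by a linear adaptive filter Q feeding a two-layer NN) can be sketched as follows. This is a minimal illustration, not the paper's method: the filter taps, the choice g = tanh, the hidden-layer width, and the learning rate are all assumptions, and ordinary gradient descent is used in place of the Kalman-based natural-gradient update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown Wiener-type system: FIR filter H followed by a zero-memory
# nonlinearity g(.). Both H and g = tanh are illustrative choices.
H = np.array([1.0, 0.5, -0.2])
def wiener_system(x):
    return np.tanh(np.convolve(x, H, mode="full")[: len(x)])

# Model: linear adaptive filter Q followed by a two-layer memoryless NN.
# Trained here with plain stochastic gradient descent; the paper's
# natural-gradient / Kalman-based preconditioning is omitted.
M, hidden = 3, 4
Q = rng.normal(scale=0.1, size=M)
W1 = rng.normal(scale=0.5, size=hidden)
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=hidden)

def model(xwin):
    s = xwin @ Q                  # linear filter output (scalar)
    h = np.tanh(W1 * s + b1)      # hidden layer
    return W2 @ h, s, h           # network output, plus intermediates

x = rng.normal(size=2000)
d = wiener_system(x)
lr = 0.02
for n in range(M - 1, len(x)):
    xwin = x[n - M + 1 : n + 1][::-1]   # most recent sample first
    y, s, h = model(xwin)
    e = y - d[n]
    # Backpropagate the instantaneous squared error through the NN and Q.
    dpre = e * W2 * (1 - h**2)          # gradient at hidden pre-activations
    W2 -= lr * e * h
    b1 -= lr * dpre
    ds = dpre @ W1                      # gradient w.r.t. filter output s
    W1 -= lr * dpre * s
    Q -= lr * ds * xwin

mse = np.mean([(model(x[n - M + 1 : n + 1][::-1])[0] - d[n]) ** 2
               for n in range(M - 1, len(x))])
```

After adaptation, `mse` should fall well below the output variance of the unknown system; replacing the plain gradient step with a metric-aware (natural-gradient) update is what the paper credits for the faster convergence.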