An Elman network (EN) can be viewed as a feed-forward (FF) neural network with an additional set of inputs from the context layer (feedback from the hidden layer). A standard on-line (real-time) backpropagation (BP) algorithm, rather than the off-line backpropagation-through-time (BPTT) algorithm, can therefore be applied to train ENs; this is usually called Elman backpropagation (EBP) in discrete-time sequence prediction applications. However, the standard BP training algorithm is not the most suitable one for ENs. A small learning rate may help stabilize EN training, but it can result in very slow convergence and poor generalization, while a large learning rate may lead to unstable training in the sense of weight divergence. An optimal trade-off between training speed and weight convergence, with good generalization capability, is therefore desired. In this paper, a robust extended Elman backpropagation (eEBP) training algorithm for ENs with a nonlinear adaptive dead-zone scheme is developed based on a novel training concept. The optimized adaptive learning rate, together with the adaptive dead zone, maximizes the training speed of the ENs at each weight-update step while improving the generalization performance of eEBP training. Computer simulations are carried out to show the improved performance of eEBP for discrete-time sequence prediction.
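The EBP idea above can be sketched in code: the context layer is treated as an extra, fixed input to a feed-forward network, so gradients are not propagated back through time as in BPTT. The fixed dead-zone threshold and constant learning rate below are illustrative placeholders only, not the paper's nonlinear adaptive dead-zone scheme or optimized adaptive learning rate; all names and parameter values are assumptions.

```python
import numpy as np

class ElmanEBP:
    """Minimal Elman network trained with on-line backpropagation (EBP).

    The context vector is treated as a constant extra input at each step,
    so no gradient flows through time (unlike BPTT). The dead-zone test
    in step() is a simple fixed-threshold stand-in for the paper's
    nonlinear adaptive dead-zone scheme.
    """

    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ih = rng.normal(0.0, 0.2, (n_hid, n_in))   # input -> hidden
        self.W_ch = rng.normal(0.0, 0.2, (n_hid, n_hid))  # context -> hidden
        self.W_ho = rng.normal(0.0, 0.2, (n_out, n_hid))  # hidden -> output
        self.context = np.zeros(n_hid)                    # context layer state

    def step(self, x, y, lr=0.05, dead_zone=1e-3):
        """One EBP update on a single (x, y) pair; returns squared error."""
        h = np.tanh(self.W_ih @ x + self.W_ch @ self.context)
        out = self.W_ho @ h                               # linear output layer
        err = y - out
        if np.linalg.norm(err) > dead_zone:               # dead zone: skip tiny errors
            d_out = -err                                  # dE/d(out) for E = 0.5*||err||^2
            d_h = (self.W_ho.T @ d_out) * (1.0 - h**2)    # backprop through tanh
            self.W_ho -= lr * np.outer(d_out, h)
            self.W_ih -= lr * np.outer(d_h, x)
            self.W_ch -= lr * np.outer(d_h, self.context)
        self.context = h                                  # feed hidden state back
        return float(np.sum(err**2))

# Usage: one-step-ahead prediction of a sine sequence.
net = ElmanEBP(n_in=1, n_hid=8, n_out=1)
seq = np.sin(np.linspace(0.0, 20.0, 400))
errors = []                                               # sum-squared error per epoch
for epoch in range(30):
    net.context = np.zeros(8)                             # reset context each pass
    sse = 0.0
    for t in range(len(seq) - 1):
        sse += net.step(np.array([seq[t]]), np.array([seq[t + 1]]))
    errors.append(sse)
```

With a well-chosen (here, fixed) learning rate the per-epoch error typically decreases; the abstract's point is that eEBP adapts this rate and the dead zone on-line so the trade-off between speed and weight convergence need not be hand-tuned.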