An Elman network (EN) can be viewed as a feedforward (FF) neural network with an additional set of inputs from the context layer (feedback from the hidden layer). Therefore, instead of the offline backpropagation-through-time (BPTT) algorithm, a standard online (real-time) backpropagation (BP) algorithm, usually called Elman BP (EBP), can be applied to train ENs for discrete-time sequence prediction. However, the standard BP training algorithm is not well suited to ENs: a low learning rate can improve training but results in very slow convergence and poor generalization performance, whereas a high learning rate can lead to unstable training in the sense of weight divergence. An optimal or suboptimal tradeoff between training speed and weight convergence, with good generalization capability, is therefore desired for ENs. This paper develops a robust extended EBP (eEBP) training algorithm for ENs that augments EBP with a new adaptive dead zone scheme. The adaptive learning rate and adaptive dead zone optimize the training of ENs for each individual output and improve the generalization performance of eEBP training. In particular, for the proposed eEBP training algorithm, convergence of the ENs' weights with the adaptive dead zone estimates is proven in the sense of Lyapunov functions. Computer simulations demonstrate the improved performance of eEBP for discrete-time sequence predictions.
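To make the setup concrete, the sketch below shows an Elman network trained with plain online EBP plus a simple per-output dead zone. It is only an illustration of the general idea described in the abstract, not the paper's proven eEBP algorithm: the class name ElmanEBP, the fixed learning rate, and the exponential-smoothing rule used to adapt the dead-zone threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanEBP:
    """Minimal Elman network with online (Elman) backpropagation.

    The context layer stores the previous hidden activation and is treated
    as a fixed extra input during the gradient step (no BPTT). The dead-zone
    rule is a generic illustration, not the paper's exact eEBP scheme:
    outputs whose error magnitude falls below an adaptive threshold
    contribute no weight update.
    """

    def __init__(self, n_in, n_hidden, n_out, lr=0.05):
        s = 0.1
        self.W_x = rng.normal(0, s, (n_hidden, n_in))      # input  -> hidden
        self.W_c = rng.normal(0, s, (n_hidden, n_hidden))   # context -> hidden
        self.b_h = np.zeros(n_hidden)
        self.W_o = rng.normal(0, s, (n_out, n_hidden))      # hidden -> output
        self.b_o = np.zeros(n_out)
        self.context = np.zeros(n_hidden)
        self.lr = lr
        self.dead_zone = np.full(n_out, 1e-3)  # per-output threshold (assumed form)

    def step(self, x, target):
        # Forward pass: hidden layer sees the current input and the context layer.
        h = np.tanh(self.W_x @ x + self.W_c @ self.context + self.b_h)
        y = self.W_o @ h + self.b_o

        # Per-output error with dead zone: small errors produce no update,
        # so measurement noise does not drive the weights (hypothetical rule).
        err = target - y
        gated = np.where(np.abs(err) > self.dead_zone, err, 0.0)
        self.dead_zone = 0.99 * self.dead_zone + 0.01 * np.abs(err)  # assumed adaptation

        # Online BP through output and hidden layers; context treated as constant.
        delta_o = gated                                   # linear output units
        delta_h = (self.W_o.T @ delta_o) * (1.0 - h**2)   # tanh derivative

        self.W_o += self.lr * np.outer(delta_o, h)
        self.b_o += self.lr * delta_o
        self.W_x += self.lr * np.outer(delta_h, x)
        self.W_c += self.lr * np.outer(delta_h, self.context)
        self.b_h += self.lr * delta_h

        self.context = h  # context layer copies the new hidden state
        return y, err

# Usage: one-step-ahead prediction of a noisy sine sequence.
net = ElmanEBP(n_in=1, n_hidden=8, n_out=1)
seq = np.sin(0.3 * np.arange(400)) + 0.05 * rng.normal(size=400)
for t in range(len(seq) - 1):
    net.step(np.array([seq[t]]), np.array([seq[t + 1]]))
```

In this toy version the threshold merely tracks a running average of the error magnitude; the paper's contribution is a principled adaptive dead zone and learning rate for which weight convergence is established via Lyapunov analysis.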