A robust extended Elman backpropagation algorithm

  • Authors:
  • Qing Song, Yeng Chai Soh, Lei Zhao

  • Affiliations:
  • School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (all authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009


Abstract

Elman networks (ENs) can be viewed as feed-forward (FF) neural networks with an additional set of inputs from a context layer (feedback from the hidden layer). Therefore, a standard on-line (real-time) backpropagation (BP) algorithm, rather than the off-line backpropagation through time (BPTT) algorithm, can be applied to train ENs; this is usually called Elman backpropagation (EBP) and is used for discrete-time sequence prediction. However, the standard BP algorithm is not the most suitable one for ENs: a small learning rate may help stabilize training, but it results in very slow convergence and poor generalization, while a large learning rate may lead to unstable training in the sense of weight divergence. An optimal trade-off between training speed and weight convergence, with good generalization capability, is therefore desired. In this paper, a robust extended Elman backpropagation (eEBP) training algorithm with a nonlinear adaptive dead-zone scheme is developed based on a novel training concept. The optimized adaptive learning rate, together with the adaptive dead zone, maximizes the training speed of the ENs at each weight-update step while improving the generalization performance of eEBP training. Computer simulations demonstrate the improved performance of eEBP for discrete-time sequence prediction.
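
As a rough illustration of the ideas summarized above, the following Python sketch trains a small Elman network with on-line backpropagation and a dead-zone rule that skips the weight update when the prediction error falls below an adaptive threshold, combined with a normalized (adaptive) learning rate. The network sizes, the threshold formula, and the step-size normalization are assumptions made for this sketch; it is not the authors' exact eEBP algorithm.

    # Minimal sketch (not the authors' exact eEBP): Elman network trained with
    # on-line BP plus an illustrative adaptive dead zone and normalized step size.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy discrete-time sequence: predict x[t+1] from x[t].
    t = np.arange(400)
    x = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

    n_in, n_hid, n_out = 1, 8, 1                         # assumed sizes
    W_in  = 0.1 * rng.standard_normal((n_hid, n_in))     # input  -> hidden
    W_ctx = 0.1 * rng.standard_normal((n_hid, n_hid))    # context -> hidden (Elman feedback)
    W_out = 0.1 * rng.standard_normal((n_out, n_hid))    # hidden -> output

    eta = 0.05                                           # base learning rate
    context = np.zeros(n_hid)

    for epoch in range(20):
        context[:] = 0.0
        for k in range(t.size - 1):
            u = x[k:k + 1]                               # current input
            d = x[k + 1:k + 2]                           # desired next value
            h = np.tanh(W_in @ u + W_ctx @ context)      # hidden state with context feedback
            y = W_out @ h
            e = d - y                                    # prediction error

            # Illustrative adaptive dead zone: update only when |e| exceeds a
            # state-dependent threshold (assumed form), with a normalized step.
            dead_zone = 0.01 * (1.0 + np.linalg.norm(h))
            if abs(e.item()) > dead_zone:
                eta_k = eta / (1.0 + h @ h)              # adaptive learning rate
                delta_h = (W_out.T @ e) * (1.0 - h ** 2) # backprop through tanh
                W_out += eta_k * np.outer(e, h)
                W_in  += eta_k * np.outer(delta_h, u)
                W_ctx += eta_k * np.outer(delta_h, context)

            context = h                                  # context layer copies hidden state

    print("final one-step |error|:", abs(e.item()))

The dead-zone test here simply freezes the weights when the error is already small, which is one simple way to trade off training speed against weight convergence; the paper's eEBP derives the threshold and learning rate from its own analysis.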