A Normalized Adaptive Training of Recurrent Neural Networks With Augmented Error Gradient

  • Authors:
  • Wu Yilei; Song Qing; Liu Sheng

  • Affiliations:
  • Nanyang Technological University, Singapore

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

For training algorithms of recurrent neural networks (RNNs), convergence speed and training error are usually two conflicting performance objectives. In this letter, we propose normalized adaptive recurrent learning (NARL) to obtain a tradeoff between the transient and steady-state responses. An augmented term is added to the error gradient so that the derivative of the cost function with respect to the hidden-layer weights is modeled exactly. The influence of the induced gain of the activation function on training stability is also taken into consideration. Moreover, an adaptive learning rate is employed to improve the robustness of the gradient training. Finally, computer simulations of a model prediction problem are presented to compare NARL with the conventional normalized real-time recurrent learning (N-RTRL) algorithm.
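
The excerpt does not include the paper's equations, so the sketch below is only a generic illustration of the ingredients the abstract names: an RTRL-style sensitivity recursion, a gradient step normalized by the squared gradient norm, the activation's induced gain, and a simple adaptive step size. The toy prediction task, the single-neuron network, and all parameter names (mu, eps) are assumptions for illustration, not the authors' NARL algorithm.

```python
# Minimal sketch: normalized RTRL-style training of a single recurrent neuron
# on a one-step-ahead prediction task.  Assumed setup, not the paper's NARL.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model prediction" problem: predict the next sample of a noisy sine wave.
T = 2000
signal = np.sin(0.1 * np.arange(T + 1)) + 0.05 * rng.standard_normal(T + 1)
x, d = signal[:-1], signal[1:]          # input sequence and desired next value

w_x, w_r, w_o = 0.1, 0.1, 0.1           # input, recurrent, and output weights
h_prev = 0.0                            # previous hidden state
dh_dwx = dh_dwr = 0.0                   # RTRL sensitivities of h w.r.t. weights
mu, eps = 0.5, 1e-6                     # base step size and regularizer (assumed)
errors = []

for t in range(T):
    # Forward pass of the single-neuron RNN.
    a = w_r * h_prev + w_x * x[t]
    h = np.tanh(a)
    y = w_o * h
    e = d[t] - y
    errors.append(e * e)

    # RTRL recursions: propagate weight sensitivities through the recurrence.
    g = 1.0 - h * h                     # tanh derivative (the "induced gain")
    dh_dwr = g * (h_prev + w_r * dh_dwr)
    dh_dwx = g * (x[t]   + w_r * dh_dwx)

    # Gradient of the output w.r.t. each weight, ordered [w_x, w_r, w_o].
    grad = np.array([w_o * dh_dwx, w_o * dh_dwr, h])

    # Normalized step (NLMS-style) with a crude adaptive learning rate:
    # shrink the step when the instantaneous error is large.
    mu_t = mu / (1.0 + abs(e))
    step = mu_t * e * grad / (eps + grad @ grad)
    w_x += step[0]; w_r += step[1]; w_o += step[2]

    h_prev = h

print("mean squared error, first/last 200 steps:",
      np.mean(errors[:200]), np.mean(errors[-200:]))
```

Under these assumptions the normalization by `grad @ grad` keeps the effective step size bounded regardless of signal scale, while the error-dependent factor `mu_t` crudely trades transient speed against steady-state error, which is the kind of tradeoff the abstract describes.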