An Improved Learning Algorithm Based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Method for Back Propagation Neural Networks

  • Authors:
  • Nazri Mohd Nawi, Meghana R. Ransing, Rajesh S. Ransing

  • Affiliation:
  • University of Wales Swansea, United Kingdom (all authors)

  • Venue:
  • ISDA '06 Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications - Volume 01
  • Year:
  • 2006

Abstract

The Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm for unconstrained nonlinear optimization is combined with a modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), denoted BFGS/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a "gain variation" term in the activation function, (2) calculating the gradient of the error with respect to both the weights and the gain values, and (3) determining the new search direction by exploiting the gradient information calculated in step (2) together with the previous search direction. The new approach improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparing it with the standard BFGS algorithm from a neural network toolbox on the chosen benchmarks. The results show that the number of iterations required by the proposed algorithm to converge is less than 15% of that required by the standard BFGS and the neural network toolbox algorithm. The new, more efficient search direction considerably improves the convergence rate.
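
The abstract only outlines the method, so the following is a minimal Python/NumPy sketch of its basic ingredients: a sigmoid activation with a trainable "gain" term, back propagation that yields gradients with respect to both the weights and the gains, and a standard BFGS inverse-Hessian update supplying the search direction. Everything here (the XOR benchmark, network sizes, and all names) is an illustrative assumption, not the authors' implementation; in particular, the paper's adaptive modification of the initial search direction (the /AG part of BFGS/AG) is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy XOR benchmark (an assumption, not the paper's benchmark set):
    # 2 inputs, 2 hidden units, 1 output.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])
    n_in, n_hid, n_out = 2, 2, 1
    n_params = n_in * n_hid + n_hid + n_hid + n_hid * n_out + n_out + n_out

    def unpack(p):
        """Split the flat parameter vector into weights, biases and gains."""
        i = 0
        W1 = p[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = p[i:i + n_hid]; i += n_hid
        c1 = p[i:i + n_hid]; i += n_hid              # hidden-layer gains
        W2 = p[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = p[i:i + n_out]; i += n_out
        c2 = p[i:i + n_out]                          # output-layer gain
        return W1, b1, c1, W2, b2, c2

    def loss_and_grad(p):
        """Sum-of-squares error and its gradient w.r.t. weights AND gains."""
        W1, b1, c1, W2, b2, c2 = unpack(p)
        a1 = X @ W1 + b1                             # hidden net input
        h = 1.0 / (1.0 + np.exp(-c1 * a1))           # sigmoid with gain c1
        a2 = h @ W2 + b2                             # output net input
        y = 1.0 / (1.0 + np.exp(-c2 * a2))           # sigmoid with gain c2
        e = y - T
        E = 0.5 * np.sum(e ** 2)
        # Backprop through the gained sigmoid f = sigma(c*a):
        # df/da = c*f*(1-f) and df/dc = a*f*(1-f),
        # so the gains get gradient entries of their own.
        d2 = e * y * (1.0 - y)
        gW2 = h.T @ (d2 * c2)
        gb2 = np.sum(d2 * c2, axis=0)
        gc2 = np.sum(d2 * a2, axis=0)
        d1 = ((d2 * c2) @ W2.T) * h * (1.0 - h)
        gW1 = X.T @ (d1 * c1)
        gb1 = np.sum(d1 * c1, axis=0)
        gc1 = np.sum(d1 * a1, axis=0)
        return E, np.concatenate([gW1.ravel(), gb1, gc1, gW2.ravel(), gb2, gc2])

    # Standard BFGS loop with a backtracking (Armijo) line search.
    p = rng.normal(scale=0.5, size=n_params)
    p[n_in * n_hid + n_hid : n_in * n_hid + 2 * n_hid] = 1.0  # gains start at 1
    p[-n_out:] = 1.0
    H = np.eye(n_params)                             # inverse-Hessian estimate
    E, g = loss_and_grad(p)
    for it in range(500):
        d = -H @ g                                   # quasi-Newton direction
        step = 1.0
        E_new, g_new = loss_and_grad(p + step * d)
        while E_new > E + 1e-4 * step * (g @ d) and step > 1e-8:
            step *= 0.5
            E_new, g_new = loss_and_grad(p + step * d)
        s, yv = step * d, g_new - g
        if s @ yv > 1e-10:                           # BFGS inverse-Hessian update
            rho = 1.0 / (s @ yv)
            I = np.eye(n_params)
            H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
                + rho * np.outer(s, s)
        p, E, g = p + s, E_new, g_new
        if np.linalg.norm(g) < 1e-6:
            break

    print(f"converged after {it + 1} iterations, error {E:.6f}")

The curvature guard (s @ yv > 0) keeps the inverse-Hessian approximation positive definite, which in turn keeps d = -Hg a descent direction; this is a standard BFGS safeguard rather than anything specific to the paper.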