The Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm, a quasi-Newton method commonly used for unconstrained nonlinear optimization, is presented and combined with a modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), denoted BFGS/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a "gain variation" term in the activation function; (2) computing the gradient of the error with respect to both the weight and gain values; and (3) determining the new search direction by exploiting the gradient information from step (2) together with the previous search direction. The new approach improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the Broyden-Fletcher-Goldfarb-Shanno algorithm from the neural network toolbox on the chosen benchmarks. The results show that the number of iterations this algorithm requires to converge is less than 15% of that required by the standard BFGS and the neural network toolbox algorithm; the new, more efficient search direction thus considerably improves the convergence rate.
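To make the three steps concrete, the following is a minimal sketch, not the paper's implementation: each neuron's sigmoid receives a trainable gain c, so f(net) = 1 / (1 + exp(-c * net)), the error gradient is taken with respect to both weights and gains, and SciPy's general-purpose BFGS routine stands in for the paper's modified BFGS/AG search-direction update. The 2-2-1 network, XOR data, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch of gain-variation training: gains are optimized
# alongside the weights, and a generic BFGS stands in for BFGS/AG.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy benchmark (assumed): XOR with a 2-2-1 sigmoid network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
d, h = 2, 2

shapes = {"W1": (d, h), "b1": (h,), "g1": (h,),
          "W2": (h, 1), "b2": (1,), "g2": (1,)}

def unpack(theta):
    p, i = {}, 0
    for k, s in shapes.items():
        n = int(np.prod(s))
        p[k] = theta[i:i + n].reshape(s)
        i += n
    return p

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(theta):
    p = unpack(theta)
    # Step (1): forward pass with gain-scaled sigmoids f(net) = sigmoid(g*net).
    A1 = X @ p["W1"] + p["b1"]          # hidden net input
    H = sigmoid(p["g1"] * A1)           # hidden activation with gain g1
    A2 = H @ p["W2"] + p["b2"]          # output net input
    Y = sigmoid(p["g2"] * A2)           # output activation with gain g2
    E = Y - T
    loss = 0.5 * np.sum(E ** 2)
    # Step (2): gradient of the error w.r.t. weights AND gains.
    Sp2 = Y * (1.0 - Y)                 # sigmoid' at the output, via Y
    dA2 = E * Sp2 * p["g2"]
    g = {"g2": np.sum(E * Sp2 * A2, axis=0),
         "W2": H.T @ dA2,
         "b2": dA2.sum(axis=0)}
    dH = dA2 @ p["W2"].T
    Sp1 = H * (1.0 - H)
    dA1 = dH * Sp1 * p["g1"]
    g["g1"] = np.sum(dH * Sp1 * A1, axis=0)
    g["W1"] = X.T @ dA1
    g["b1"] = dA1.sum(axis=0)
    grad = np.concatenate([g[k].ravel() for k in shapes])
    return loss, grad

# Random initial weights; gains start at 1 (a plain sigmoid).
theta0 = np.concatenate([rng.normal(0, 0.5, int(np.prod(s))) if k[0] in "Wb"
                         else np.ones(int(np.prod(s)))
                         for k, s in shapes.items()])

# Step (3): BFGS builds each search direction from the current gradient and
# an inverse-Hessian approximation accumulated over previous steps.
res = minimize(loss_and_grad, theta0, jac=True, method="BFGS")
print(res.fun, res.nit)  # final error and iteration count
```

Treating the gains as extra coordinates of the parameter vector is what lets a stock quasi-Newton optimizer handle them; the paper's contribution, by contrast, lies in how the gain gradients reshape the initial search direction itself.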