Improved computation for Levenberg-Marquardt training

  • Authors:
  • Bogdan M. Wilamowski; Hao Yu

  • Affiliations:
  • Department of Electrical and Computer Engineering, Auburn University, Auburn, AL

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2010

Abstract

The improved computation presented in this paper aims to optimize the learning process of neural networks trained with the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian matrix multiplication and storage, which resolves the memory limitation of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular part need to be calculated. Training speed is therefore improved significantly, both because a smaller array is stored in memory and because fewer operations are required to form the quasi-Hessian matrix. The gains in memory and time efficiency are especially pronounced when training with large-sized patterns.
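A minimal sketch of the accumulation idea described in the abstract is given below. It is not the authors' implementation; the function name accumulate_quasi_hessian, the row-by-row iteration over per-pattern Jacobian rows, and the NumPy usage are assumptions for illustration. The point is that Q = JᵀJ and g = Jᵀe can be built up one Jacobian row at a time, so the full Jacobian is never stored, and only the upper triangle of the symmetric Q is computed explicitly.

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors, n_params):
    """Sketch: accumulate Q = J^T J and g = J^T e one Jacobian row at a time,
    without storing the full Jacobian. Only the upper triangle of Q is
    updated; the lower triangle is mirrored at the end (Q is symmetric)."""
    Q = np.zeros((n_params, n_params))
    g = np.zeros(n_params)
    for j_row, e in zip(jacobian_rows, errors):
        # rank-one update of the upper triangle only
        for i in range(n_params):
            Q[i, i:] += j_row[i] * j_row[i:]
        g += e * j_row  # gradient contribution of this pattern
    # mirror the upper triangle to restore the full symmetric matrix
    Q = np.triu(Q) + np.triu(Q, 1).T
    return Q, g

# The LM weight update would then solve (Q + mu * I) dw = -g, e.g.:
# dw = np.linalg.solve(Q + mu * np.eye(n_params), -g)
```

Because each Jacobian row contributes a rank-one update, memory usage scales with the number of parameters rather than with the number of training patterns, which is the memory saving the abstract refers to.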