Exploring and comparing the best “direct methods” for the efficient training of MLP-networks

  • Authors:
  • M. Di Martino, S. Fanelli, M. Protasi

  • Affiliations:
  • Dipartimento di Matematica, University of Rome

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1996


Abstract

It is well known that the main difficulties of algorithms based on backpropagation are their susceptibility to local minima and their slow adaptation to the patterns during training. In this paper, we present a class of algorithms that overcomes these difficulties by utilizing "direct" numerical methods for the computation of the weight matrices. In particular, we investigate the performance of the FBFBK-LSB (least-squares backpropagation) algorithm and of the iterative conjugate gradient singular-value decomposition (ICGSVD) algorithm, introduced by Bärmann and Biegler-König (1993) and by the authors, respectively. Numerical results on several benchmark problems show the superior reliability and/or efficiency of our algorithm, ICGSVD.
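
To illustrate the general idea of a "direct" method, the sketch below shows how the output-layer weights of a single-hidden-layer MLP can be computed in one step by solving a linear least-squares problem via the SVD (with truncation of small singular values), rather than by iterative gradient descent. This is only a minimal illustration of the SVD-based least-squares ingredient, not the authors' FBFBK-LSB or ICGSVD procedures; all names, dimensions, and the toy data are hypothetical.

```python
# Minimal sketch: direct least-squares computation of an MLP's output-layer
# weights via the SVD pseudoinverse. Illustrative only; not the paper's
# FBFBK-LSB or ICGSVD algorithms.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N patterns, d inputs, m targets (all hypothetical).
N, d, m, hidden_dim = 200, 4, 1, 16
X = rng.normal(size=(N, d))
T = np.sin(X @ rng.normal(size=(d, m)))  # target matrix, N x m

# Fix the input-to-hidden weights and compute the hidden activations.
W1 = rng.normal(size=(d, hidden_dim))
b1 = rng.normal(size=hidden_dim)
H = np.tanh(X @ W1 + b1)                 # N x hidden_dim

# Direct solve for the output weights W2:
#   minimize ||H @ W2 - T||_F  using the SVD-based pseudoinverse of H.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
tol = max(H.shape) * np.finfo(s.dtype).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)  # truncate tiny singular values
W2 = Vt.T @ (s_inv[:, None] * (U.T @ T))  # hidden_dim x m

print(f"least-squares residual: {np.linalg.norm(H @ W2 - T):.4f}")
```

In one sweep this replaces many gradient-descent updates for the output layer with a single, well-conditioned linear solve; schemes in the spirit of the paper apply such layerwise least-squares solves repeatedly (and, in ICGSVD, combine them with conjugate gradient iterations).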