Original Contribution: On a class of efficient learning algorithms for neural networks

  • Authors:
  • Frank Bärmann; Friedrich Biegler-König


  • Venue:
  • Neural Networks
  • Year:
  • 1992

Abstract

The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly as many internal nodes as there are examples to be learnt, it is theoretically able to learn these examples exactly. If, however, the generalized delta rule (or back propagation) is used as the learning algorithm in numerical experiments, a network's learning aptitude generally declines with an increasing number of internal nodes. Iterating the solvability condition for accurate learning, instead of minimizing the total error, yields learning algorithms whose learning aptitude increases with the number of internal nodes. At the same time, these methods allow further nodes to be added dynamically in a particularly simple manner. A numerical implementation showed that, whenever the solvability condition was valid, the algorithm learnt the learning set to the limits of computer accuracy in all cases tested and, in particular, did not get caught in local minima of the error function. Furthermore, its convergence speed is considerably higher than that of back propagation.
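The sketch below is not the authors' algorithm, only a minimal illustration of the solvability argument the abstract relies on: when a one-hidden-layer network has exactly as many hidden nodes as training examples, the hidden-activation matrix is square, and if it is nonsingular the output weights can be obtained by solving a linear system exactly rather than by gradient descent on a total-error function. The random input weights, tanh activation, and NumPy names are illustrative assumptions; the paper's actual method iterates the solvability condition and supports adding nodes dynamically, which this sketch does not show.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 20, 3                      # N training examples, d input features
X = rng.normal(size=(N, d))       # learning set inputs (assumed synthetic data)
T = rng.normal(size=(N, 1))       # learning set targets

# One hidden node per example, so the hidden-activation matrix H is N x N.
W_in = rng.normal(size=(d, N))    # input-to-hidden weights (fixed here for illustration)
b = rng.normal(size=N)            # hidden biases
H = np.tanh(X @ W_in + b)         # N x N hidden-activation matrix

# Solvability condition: if H is nonsingular, output weights satisfying
# H @ W_out = T reproduce all N examples exactly (up to machine precision),
# with no error minimization and hence no local minima to get caught in.
W_out = np.linalg.solve(H, T)

residual = np.max(np.abs(H @ W_out - T))
print(f"max training error: {residual:.2e}")   # on the order of floating-point accuracy
```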