Advances in Feedforward Neural Networks: Demystifying Knowledge Acquiring Black Boxes

  • Authors:
  • Carl G. Looney

  • Affiliations:
  • -

  • Venue:
  • IEEE Transactions on Knowledge and Data Engineering

  • Year:
  • 1996

Abstract

We survey recent research on the supervised training of feedforward neural networks. The goal is to expose how the networks work, how to engineer them so they can learn data with less extraneous noise, how to train them efficiently, and how to ensure that the training is valid. The scope covers gradient-descent and polynomial line-search methods, from backpropagation through conjugate-gradient and quasi-Newton methods. There is a consensus among researchers that adaptive step gains (learning rates) can stabilize and accelerate convergence, and that a good starting weight set improves both the training speed and the learning quality. The training problem comprises both the design of a network function and the fitting of that function to a set of input and output data points by computing a set of coefficient weights. The form of the function can be adjusted by adjoining new neurons, pruning existing ones, and setting other parameters such as biases and exponential rates. Our exposition reveals several useful results that are readily implementable.
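To make the abstract's point about adaptive step gains concrete, the sketch below trains a small one-hidden-layer feedforward network by backpropagation with a simplified "bold driver" style gain rule: grow the step gain after the error falls, shrink it after the error rises. This is a minimal illustration of the general idea, not the paper's specific algorithm; the network size, the XOR data, and the gain constants (1.05 and 0.5) are assumptions chosen for the example.

```python
# Minimal sketch (illustrative, not the paper's algorithm): gradient-descent
# training of a 2-4-1 feedforward network with an adaptive step gain.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Small random starting weights: a reasonable initial weight set
# speeds training and improves learning quality, as the survey notes.
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)   # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)   # network output
    return H, Y

eta = 0.5            # step gain (learning rate), adapted during training
prev_err = np.inf
for epoch in range(5000):
    H, Y = forward(X)
    err = 0.5 * np.sum((Y - T) ** 2)   # sum-of-squares training error

    # Backpropagation: gradients of the error w.r.t. each weight array.
    dY = (Y - T) * Y * (1 - Y)
    dW2 = H.T @ dY;  db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH;  db1 = dH.sum(axis=0)

    # Adaptive step gain: accelerate while the error falls, back off when
    # it rises. (The classic bold-driver rule also restores the previous
    # weights after a failed step; that is omitted here for brevity.)
    eta = eta * 1.05 if err < prev_err else eta * 0.5
    prev_err = err

    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

print("final error:", prev_err, "outputs:", forward(X)[1].ravel())
```

With a fixed gain this kind of problem can stall or oscillate; letting the gain adapt typically reaches a low error in fewer epochs, which illustrates the stabilization-and-acceleration effect the abstract describes.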