Original Contribution: Optimization of the hidden unit function in feedforward neural networks

  • Authors:
  • Osamu Fujita

  • Affiliations:
  • -

  • Venue:
  • Neural Networks
  • Year:
  • 1992

Abstract

A novel objective function is proposed for optimizing the hidden unit function in feedforward neural networks. The objective function measures how well a hidden unit reduces the least-squared output errors of the linear output unit, and it is derived from the decrease in the output errors obtained by adding the hidden unit. The optimized output state vectors of the hidden units span a proper state space that includes the desired output vectors of the network. The optimization (maximization of the objective function) is equivalent to minimizing the angle between the desired output vector and the projection of the hidden unit's output state vector onto the orthogonal complement of the subspace spanned by the other state vectors. An approximate solution can be obtained with a gradient ascent algorithm. This optimization method is useful for constructing fully connected feedforward neural networks and for minimizing the size of layered networks.
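As a rough illustration of the geometric picture described in the abstract, the sketch below scores a candidate hidden unit by the squared cosine of the angle between the desired output vector and the projection of the unit's output state vector onto the orthogonal complement of the span of the existing units' state vectors, and maximizes that score by gradient ascent. This is not the paper's exact formulation: the symbols (X, d, H, w), the sigmoidal unit, and the squared-cosine form of the objective are assumptions made for the sketch.

```python
# Minimal sketch (assumed notation, not the paper's):
#   X : (P, n) matrix of P training patterns
#   d : (P,)   desired output vector of the network
#   H : (P, k) matrix whose columns are the output state vectors of the
#              k hidden units already in the network
#   w : (n,)   weight vector of the candidate sigmoidal hidden unit
import jax
import jax.numpy as jnp


def objective(w, X, d, H):
    # Output state vector of the candidate hidden unit over all patterns.
    h = jax.nn.sigmoid(X @ w)
    # Project h onto the orthogonal complement of span(H).
    if H.shape[1] > 0:
        Q, _ = jnp.linalg.qr(H)          # orthonormal basis of span(H)
        h_perp = h - Q @ (Q.T @ h)
    else:
        h_perp = h
    # Squared cosine of the angle between d and h_perp (larger is better;
    # maximizing it minimizes the angle described in the abstract).
    num = (d @ h_perp) ** 2
    den = (d @ d) * (h_perp @ h_perp) + 1e-12
    return num / den


grad_fn = jax.grad(objective)            # gradient with respect to w


def optimize_hidden_unit(X, d, H, steps=500, lr=0.5, seed=0):
    # Gradient ascent on the objective, starting from small random weights.
    key = jax.random.PRNGKey(seed)
    w = 0.1 * jax.random.normal(key, (X.shape[1],))
    for _ in range(steps):
        w = w + lr * grad_fn(w, X, d, H)
    return w
```

The QR factorization is just one convenient way to form the projection onto the orthogonal complement; the gradient ascent loop mirrors the approximate solution method mentioned in the abstract, while step size and initialization are arbitrary choices for illustration.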