Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter

  • Authors:
  • R. Reed; R. J. Marks, II; S. Oh

  • Affiliations:
  • Dept. of Electr. Eng., Washington Univ., Seattle, WA

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1995

Abstract

The generalization performance of feedforward layered perceptrons can, in many cases, be improved either by smoothing the target via convolution, regularizing the training error with a smoothing constraint, decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or adding noise (i.e., jitter) to the input training data. In certain important cases these procedures yield highly similar results, although at different computational costs. Training with jitter, for example, requires significantly more computation than sigmoid scaling.
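
The correspondence between jitter and gain scaling can be sketched numerically. The following is a minimal NumPy sketch, not code from the paper: it compares the noise-averaged output of a single sigmoid unit (training-with-jitter view) against the same unit evaluated on clean inputs with a reduced gain. The weight w, jitter level sigma, and gain g are illustrative assumptions, not values derived in the paper.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic sigmoid with adjustable gain (slope)."""
    return 1.0 / (1.0 + np.exp(-gain * x))

rng = np.random.default_rng(0)
w = 2.0                      # a single weight, for illustration
x = np.linspace(-3, 3, 7)    # a few sample inputs
sigma = 0.5                  # jitter standard deviation (assumed)

# Training with jitter: average the unit's output over many noisy
# copies of each input, approximating convolution with the noise pdf.
noise = rng.normal(0.0, sigma, size=(10000, 1))
jitter_avg = sigmoid(w * (x + noise)).mean(axis=0)

# Sigmoid gain scaling: pass the clean input through a sigmoid with
# reduced gain; g < 1 plays the role of the smoothing (assumed value).
g = 0.8
gain_scaled = sigmoid(w * x, gain=g)

for xi, ji, si in zip(x, jitter_avg, gain_scaled):
    print(f"x={xi:+.1f}  jitter-avg={ji:.3f}  gain-scaled={si:.3f}")
```

Note the cost asymmetry the abstract points out: the jitter estimate here averages 10,000 forward passes per input, while gain scaling needs only one pass with a rescaled slope.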