Training Feedforward Neural Networks with Gain Constraints

  • Authors: Eric Hartman
  • Affiliation: Pavilion Technologies, 1110 Metric Blvd. #700, Austin, TX 78758-4018, U.S.A.
  • Venue: Neural Computation
  • Year: 2000

Abstract

Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality- or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model's domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
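
As a concrete illustration of the penalty approach the abstract describes, the following minimal sketch (written in JAX; the network architecture, the quadratic hinge form of the penalty, and the fixed penalty weight `lam` are assumptions for illustration, not the paper's exact formulation) computes the gains dy/dx by automatic differentiation and adds penalty terms for violated inequality bounds to a squared-error objective minimized by gradient descent. The paper additionally adapts the relative strengths of the objective terms during training; here `lam` is held fixed for simplicity.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Two-layer tanh feedforward net mapping a d-vector to a scalar."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(x @ W1 + b1)
    return (h @ W2 + b2)[0]

# Gains dy/dx for each sample, obtained by automatic differentiation.
gain_fn = jax.vmap(jax.grad(mlp, argnums=1), in_axes=(None, 0))

def loss(params, x, y, g_lo, g_hi, lam):
    preds = jax.vmap(mlp, in_axes=(None, 0))(params, x)
    mse = jnp.mean((preds - y) ** 2)
    g = gain_fn(params, x)  # shape (n_samples, n_inputs)
    # Penalty terms for violating the inequality bounds g_lo <= dy/dx <= g_hi
    # (quadratic hinge form is an assumption here).
    pen = jnp.mean(jnp.maximum(g_lo - g, 0.0) ** 2
                   + jnp.maximum(g - g_hi, 0.0) ** 2)
    return mse + lam * pen

@jax.jit
def step(params, x, y, g_lo, g_hi, lam, lr=1e-2):
    """One gradient-descent step on the penalized objective."""
    grads = jax.grad(loss)(params, x, y, g_lo, g_hi, lam)
    return jax.tree_util.tree_map(lambda p, dp: p - lr * dp, params, grads)

# Example: require a nonnegative gain with respect to input 0 on
# synthetic data (all values below are illustrative).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d, nh = 2, 16
params = (0.5 * jax.random.normal(k1, (d, nh)), jnp.zeros(nh),
          0.5 * jax.random.normal(k2, (nh, 1)), jnp.zeros(1))
x = jax.random.normal(k3, (128, d))
y = x[:, 0] + 0.3 * jnp.sin(3 * x[:, 1])
g_lo = jnp.array([0.0, -1e6])   # lower gain bounds per input
g_hi = jnp.array([1e6, 1e6])    # upper gain bounds per input
for _ in range(2000):
    params = step(params, x, y, g_lo, g_hi, lam=10.0)
```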