What Inductive Bias Gives Good Neural Network Training Performance?

  • Authors:
  • C. W. Omlin

  • Venue:
  • IJCNN '00: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 3
  • Year:
  • 2000

Abstract

There has been increased interest in the use of prior knowledge for training neural networks. Prior knowledge in the form of Horn clauses has been the predominant paradigm for knowledge-based neural networks. Given a set of training examples and an initial domain theory, a neural network is constructed that fits the training examples by preprogramming some of the weights. The initialized neural network is then trained using backpropagation to refine the knowledge. This paper proposes a heuristic for determining the strength of the inductive bias by making use of gradient information in weight space in the direction of the programmed weights. The network starts its search in the region of weight space where the gradient is maximal, thus speeding up convergence. Tests on a benchmark problem from molecular biology demonstrate that our heuristic reduces training time by 60% on average compared with a random choice of the strength of the inductive bias; this is within 20% of the training time achievable with the optimal inductive bias. The difference in generalization performance is not statistically significant.
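The heuristic described above amounts to a one-dimensional search over candidate bias strengths: scale the rule-programmed weights by each candidate strength, evaluate the loss gradient at that initialization, and keep the strength whose gradient norm is largest, so that backpropagation begins where the error surface is steepest. The following is a minimal sketch of that idea in NumPy; the toy one-hidden-layer network, the function names, and the candidate grid are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the bias-strength heuristic: pick the scaling of the
# rule-programmed weights that maximizes the initial gradient norm.
# All names here (select_bias_strength, the toy MLP) are hypothetical.
import numpy as np

def mlp_loss_grad(W1, W2, X, y):
    """Forward/backward pass for a one-hidden-layer sigmoid MLP with
    mean-squared-error loss; returns (loss, dW1, dW2)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sig(X @ W1)                       # hidden activations
    out = sig(H @ W2)                     # network output
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err * out * (1 - out) / len(y)
    dW2 = H.T @ d_out
    d_hid = (d_out @ W2.T) * H * (1 - H)
    dW1 = X.T @ d_hid
    return loss, dW1, dW2

def select_bias_strength(W1_rules, W2_rules, X, y, candidates):
    """Scale the domain-theory weights by each candidate strength h and
    keep the h whose initial gradient in weight space has maximal norm."""
    best_h, best_norm = None, -np.inf
    for h in candidates:
        _, dW1, dW2 = mlp_loss_grad(h * W1_rules, h * W2_rules, X, y)
        g = np.sqrt(np.sum(dW1 ** 2) + np.sum(dW2 ** 2))
        if g > best_norm:
            best_h, best_norm = h, g
    return best_h

# Toy usage: random "domain theory" weights and data stand in for a real
# rule-to-network encoding, e.g. Horn clauses compiled into an MLP.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)
W1_rules = rng.standard_normal((8, 4))
W2_rules = rng.standard_normal((4, 1))
h_star = select_bias_strength(W1_rules, W2_rules, X, y,
                              np.linspace(0.1, 8.0, 40))
print("selected bias strength:", h_star)
```

Under this reading, the bias strength trades off two failure modes: too small a scaling erases the domain theory, while too large a scaling saturates the sigmoids and flattens the gradient, so maximizing the initial gradient norm picks a starting point from which backpropagation can make rapid progress.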