Regularizing the effect of input noise injection in feedforward neural networks training

  • Authors:
  • Abd-Krim Seghouane;Yassir Moudden;Gilles Fleury

  • Affiliations:
  • Plateau de Moulon, École Supérieure d’Électricité, Service des Mesures, 3 rue Joliot Curie, 91192, Gif sur Yvette Cedex, France

  • Venue:
  • Neural Computing and Applications
  • Year:
  • 2004

Abstract

Injecting noise into the inputs during feedforward neural network (FNN) training can markedly improve generalization performance. Previous work justifies this by showing that noise injection is equivalent to a smoothing regularization, with the input noise variance playing the role of the regularization parameter. The success of the approach therefore depends on an appropriate choice of the input noise variance. However, it is often not known a priori whether the degree of smoothness imposed on the FNN mapping is consistent with the unknown function to be approximated. To gain better control over this smoothing effect, a loss function is proposed that balances the smoothed fitting induced by noise injection against the precision of approximation. The second term, which penalizes the undesirable effect of input noise injection by controlling the deviation of the randomly perturbed loss, is obtained by expressing a certain distance between the original loss function and its randomly perturbed version. In fact, this term can be derived in general for parametric models that satisfy a Lipschitz property. An example is included to illustrate the effectiveness of learning with the proposed loss function when noise injection is used.
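The general idea can be sketched in code. The snippet below is an illustrative NumPy example, not the authors' exact formulation: a small tanh network is evaluated on clean and noise-perturbed inputs, and a hypothetical penalty term (here, the squared deviation between the perturbed and original losses, weighted by a balance parameter `lam`) stands in for the distance term described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, b1, W2, b2, x):
    # One-hidden-layer tanh network
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def combined_loss(params, x, y, sigma=0.1, lam=1.0):
    """Illustrative combined loss: the noisy-input fitting error plus a
    penalty on the deviation of the randomly perturbed loss from the
    original loss. `sigma` is the input noise std, `lam` the balance
    parameter; both are assumed here, not taken from the paper."""
    W1, b1, W2, b2 = params
    # Input noise injection: perturb each input with Gaussian noise
    x_noisy = x + sigma * rng.standard_normal(x.shape)
    loss_clean = np.mean((forward(W1, b1, W2, b2, x) - y) ** 2)
    loss_noisy = np.mean((forward(W1, b1, W2, b2, x_noisy) - y) ** 2)
    # Penalize how far the perturbed loss deviates from the original one
    return loss_noisy + lam * (loss_noisy - loss_clean) ** 2

# Toy regression data: approximate a sine with random initial weights
x = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * x)
params = (0.5 * rng.standard_normal((1, 8)), np.zeros(8),
          0.5 * rng.standard_normal((8, 1)), np.zeros(1))
print(combined_loss(params, x, y))
```

Minimizing such a loss (e.g. by gradient descent on `params`) would retain the smoothing benefit of noise injection while the second term keeps the perturbed fit from drifting too far from the noise-free one.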