On Weight-Noise-Injection Training

  • Authors:
  • Kevin Ho; Chi-Sing Leung; John Sum

  • Affiliations:
  • Department of Computer Science and Communication Engineering, Providence University, Sha-Lu, Taiwan; Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, KLN, Hong Kong; Institute of E-Commerce, National Chung Hsing University, Taiwan

  • Venue:
  • Advances in Neuro-Information Processing
  • Year:
  • 2009

Abstract

While injecting weight noise during training has been proposed for more than a decade as a way to improve the convergence, generalization, and fault tolerance of a neural network, little theoretical work has been done on its convergence proof or on the objective function it minimizes. By applying the Gladyshev theorem, it is shown that training a radial basis function (RBF) network with weight noise injection converges almost surely. Moreover, the corresponding objective function is essentially the mean squared error (MSE). This objective function indicates that injecting weight noise during the training of an RBF network cannot improve fault tolerance. Although this technique has been effectively applied to multilayer perceptrons (MLPs), further analysis of the expected update equation for training an MLP with weight noise injection is presented. The performance difference between these two models under weight noise injection is discussed.
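To make the training scheme discussed in the abstract concrete, below is a minimal sketch of weight-noise-injection training for an RBF network with fixed Gaussian centres, where only the output weights are trained by online gradient descent on the MSE. The additive Gaussian noise level, learning rate, basis width, and all function names are illustrative assumptions, not the authors' implementation; the key step is that noise is injected into a copy of the weights used to compute the error, while the update is applied to the clean weights.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations: Phi[i, j] = exp(-||x_i - c_j||^2 / width)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / width)

def train_rbf_weight_noise(X, y, centers, width=1.0, sigma_noise=0.01,
                           lr=0.05, epochs=200, seed=0):
    """Online gradient descent on the MSE with additive Gaussian noise
    injected into the output weights at every step: the noisy copy is
    used to compute the error, the clean weights receive the update."""
    rng = np.random.default_rng(seed)
    Phi = rbf_design_matrix(X, centers, width)        # (N, M) basis outputs
    w = np.zeros(Phi.shape[1])                        # clean output weights
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w_noisy = w + sigma_noise * rng.standard_normal(w.shape)
            err = y[i] - Phi[i] @ w_noisy             # error under injected noise
            w += lr * err * Phi[i]                    # update the clean weights
    return w

# Usage: fit a noisy sine curve with 10 RBF centres over the input range.
X = np.linspace(-3, 3, 80)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(80)
centers = np.linspace(-3, 3, 10)[:, None]
w = train_rbf_weight_noise(X, y, centers)
mse = np.mean((rbf_design_matrix(X, centers, 1.0) @ w - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

Consistent with the abstract's claim, the expected update here is the ordinary MSE gradient step (the zero-mean noise averages out for the linear output layer), which is why weight noise injection alone is not expected to buy fault tolerance for the RBF case.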