Training neural networks with additive noise in the desired signal

  • Authors:
  • Chuan Wang; J. C. Principe

  • Affiliations:
  • AT&T Bell Labs., Murray Hill, NJ; -

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1999

Abstract

A global optimization strategy for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response) is proposed. Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice: it preserves the backpropagation algorithm and requires only a single random generator, with a monotonically decreasing step size, per output channel. Hence, this is an ideal strategy to speed up supervised learning and to avoid entrapment in local minima when the noise variance is appropriately scheduled.
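
A minimal sketch of the idea described in the abstract, under illustrative assumptions: Gaussian noise is added to the desired signal (the targets) rather than to the weights, and its standard deviation decays exponentially over epochs (the values `sigma0` and `decay` are hypothetical, not taken from the paper). A single linear unit trained by gradient descent stands in for a full backpropagation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: fit a single linear unit by gradient descent
# (a stand-in for backpropagation on a larger network).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
d = 3.0 * X[:, 0] + 0.5                      # clean desired signal

w, b = 0.0, 0.0                              # parameters
lr = 0.1                                     # learning rate
sigma0, decay = 0.5, 0.98                    # assumed noise schedule (illustrative)

for epoch in range(200):
    sigma = sigma0 * decay ** epoch          # monotonically decreasing noise level
    # Inject additive noise into the desired signal, not into the weights.
    d_noisy = d + sigma * rng.normal(size=d.shape)
    y = w * X[:, 0] + b                      # forward pass
    err = y - d_noisy                        # error against the perturbed targets
    # Gradient of the mean-squared error with respect to the parameters.
    w -= lr * np.mean(err * X[:, 0])
    b -= lr * np.mean(err)

print(f"learned w={w:.3f}, b={b:.3f} (clean target: w=3.0, b=0.5)")
```

Because the noise variance shrinks toward zero, the perturbed targets converge to the clean desired signal, so the late stages of training behave like ordinary backpropagation while the early, noisy stages help the search escape poor regions of the error surface.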