Using localizing learning to improve supervised learning algorithms

  • Authors: S. Weaver; L. Baird; M. Polycarpou
  • Affiliations: Genomatix USA, Cincinnati, OH
  • Venue: IEEE Transactions on Neural Networks
  • Year: 2001

Abstract

Slow learning in neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another. To mitigate unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network so that the potential for future interference during learning is reduced. This is accomplished by minimizing a biobjective cost function that combines the approximation error with a term that measures interference. An analysis of the algorithm's convergence properties shows that learning with this algorithm reduces future unlearning. The algorithm can be applied during online learning, or it can be used to condition a network so that it is immune to interference during a future learning stage. A simple example demonstrates how interference manifests itself in a network and how less interference can lead to more efficient learning. Simulations demonstrate how the new algorithm, by virtue of the extra cost-function term, speeds up training in various situations.
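
The biobjective cost can be sketched concretely. Below is a minimal illustration in JAX, not the authors' algorithm: it assumes interference between two training points can be measured by the negative overlap of their per-sample loss gradients (an update that helps one point while undoing another shows up as a negative gradient dot product), and it adds that surrogate, weighted by a hypothetical coefficient lam, to the mean-squared approximation error.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def net(params, x):
    # Small, nonlinearly parameterized network: one tanh hidden layer.
    w1, b1, w2, b2 = params
    return (jnp.tanh(x @ w1 + b1) @ w2 + b2).squeeze()

def sample_loss(params, x, y):
    # Squared approximation error at a single training point.
    return (net(params, x) - y) ** 2

def flat_grad(params, x, y):
    # Per-sample loss gradient, flattened into one vector.
    return ravel_pytree(jax.grad(sample_loss)(params, x, y))[0]

def cost(params, xs, ys, lam=0.1):
    # Biobjective cost: approximation error plus an interference surrogate.
    err = jnp.mean(jax.vmap(sample_loss, (None, 0, 0))(params, xs, ys))
    g = jax.vmap(flat_grad, (None, 0, 0))(params, xs, ys)          # (N, P)
    gn = g / jnp.sqrt(jnp.sum(g * g, axis=1, keepdims=True) + 1e-12)
    overlap = gn @ gn.T                                            # pairwise cosines
    # Penalize negative overlap: updates at one point that undo another.
    interference = jnp.mean(jnp.maximum(-overlap, 0.0) ** 2)
    return err + lam * interference

@jax.jit
def step(params, xs, ys, lr=0.05):
    # Plain gradient descent on the combined cost.
    grads = jax.grad(cost)(params, xs, ys)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Toy 1-D regression problem.
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = (0.5 * jax.random.normal(key1, (1, 16)), jnp.zeros(16),
          0.5 * jax.random.normal(key2, (16, 1)), jnp.zeros(1))
xs = jnp.linspace(-2.0, 2.0, 32).reshape(-1, 1)
ys = jnp.sin(2.0 * xs).squeeze()
for _ in range(200):
    params = step(params, xs, ys)
```

With lam = 0 this reduces to ordinary gradient descent on the approximation error; increasing lam trades some immediate error reduction for updates whose effects are more localized, which is the qualitative trade-off the abstract attributes to the extra cost-function term.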