Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations

  • Authors: N. B. Karayiannis
  • Affiliations: Dept. of Electrical & Computer Engineering, University of Houston, TX
  • Venue: IEEE Transactions on Neural Networks
  • Year: 1996

Abstract

This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. The paper proposes that the initial set of internal representations can be formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the inputs of the network with the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule known as Oja's rule. Various generalized Hebbian rules were experimentally tested and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms used to perform nontrivial training tasks, and the improvement becomes more significant as the size and complexity of the training task increase.
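The paper itself gives the precise update rules; the sketch below only illustrates the general idea of forming the input-to-hidden weights with an Oja-type generalized Hebbian update before supervised training begins. It is a minimal sketch, not the paper's implementation: the function name `hebbian_init`, the Sanger-style feedback term used to handle multiple hidden units, and all parameter values (learning rate, number of passes, optional nonlinearity) are illustrative assumptions.

```python
import numpy as np

def hebbian_init(X, n_hidden, lr=0.01, epochs=10, g=None, seed=0):
    """Form initial input-to-hidden weights with an Oja-type
    generalized Hebbian rule (a sketch, not the paper's exact scheme).

    X : (n_samples, n_inputs) array of training inputs
    g : optional elementwise nonlinearity applied to the unit outputs;
        g=None gives the linear variation of the rule
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    act = g if g is not None else (lambda y: y)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = act(W @ x)  # hidden-unit responses to one input
            # Hebbian term y x^T, stabilized by an Oja/Sanger feedback
            # term that keeps each weight vector bounded and roughly
            # orthogonal to the ones above it.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# The returned W would then serve as the initial hidden-layer weights
# for a standard gradient-descent (e.g., backpropagation) training run.
```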