The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network

  • Authors: N. S. Rubanov
  • Affiliation: Dept. of Radiophys., Byelorussian State Univ., Minsk
  • Venue: IEEE Transactions on Neural Networks
  • Year: 2000

Abstract

Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome. In particular, two major drawbacks are convergence to local minima and long learning times. We propose an efficient learning method for an FNN that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN, and we propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces the learning time, and improves generalization compared with other well-known methods.
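To make the abstract's idea concrete, here is a minimal sketch of how a first-order Taylor linearization of a single layer can be combined with Kaczmarz row-action updates on its weights. Everything in it is an illustrative assumption: the function names, the sigmoid activation, the single-layer setting, and the sweep schedule are not taken from the paper, which applies its procedure layer by layer inside a BP framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kaczmarz_layer_update(W, X, T, sweeps=5):
    """Hypothetical layer-wise update: linearize a sigmoid layer around
    the current weights W (first-order Taylor expansion) and drive the
    output residual to zero with Kaczmarz projections, one training
    sample at a time. W is (n_out, n_in); X is (n_samples, n_in);
    T holds the target layer outputs, shape (n_samples, n_out)."""
    for _ in range(sweeps):
        for x, t in zip(X, T):
            z = W @ x                 # pre-activation of the layer
            y = sigmoid(z)
            g = y * (1.0 - y)         # sigmoid derivative f'(z)
            # Linearization: y_k(W + dW) ~= y_k + (g_k * x) . dW_k,
            # giving one linear equation per output unit k.
            a = np.outer(g, x)        # Jacobian rows, shape (n_out, n_in)
            r = t - y                 # residual at the current weights
            norms = np.einsum('ij,ij->i', a, a) + 1e-12
            # Kaczmarz projection of each weight row onto its equation.
            W += (r / norms)[:, None] * a
    return W

# Toy usage on synthetic data (all values are made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
T = sigmoid(X @ rng.normal(size=(8, 3)))   # reachable (100, 3) targets
W = rng.normal(scale=0.1, size=(3, 8))
W = kaczmarz_layer_update(W, X, T)
print(np.mean((sigmoid(X @ W.T) - T) ** 2))  # should shrink vs. initial W
```

Because each output unit's weight row appears only in its own equation, the per-row Kaczmarz projections can be applied in one vectorized step, which is what the `einsum` line exploits.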