Convergence of learning algorithms with constant learning rates

  • Authors:
  • C.-M. Kuan; K. Hornik

  • Affiliations:
  • Dept. of Economics, University of Illinois, Urbana, IL; -

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1991

Abstract

The behavior of neural network learning algorithms with a small, constant learning rate ε in stationary, random input environments is investigated. It is rigorously established that, as ε tends to zero, the sequence of weight estimates can be approximated, in the sense of weak convergence of random processes, by the solution of a certain ordinary differential equation. As applications, backpropagation in feedforward architectures and some feature extraction algorithms are studied in more detail.
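
For orientation, the central approximation can be sketched as a worked equation; the notation below (w_k for the weight estimates, h for the stochastic update direction, x_k for the random inputs) is illustrative and not taken verbatim from the paper:

```latex
% Constant-learning-rate recursion: weights w_k, stationary random inputs x_k,
% stochastic update direction h, small constant learning rate \varepsilon.
\[
  w_{k+1} = w_k + \varepsilon\, h(w_k, x_k), \qquad k = 0, 1, 2, \dots
\]
% Interpolating the iterates on the slow time scale t = k\varepsilon, the
% interpolated process converges weakly, as \varepsilon \to 0, to the solution
% of the averaged ordinary differential equation
\[
  \frac{dw}{dt} = \bar{h}(w), \qquad \bar{h}(w) = \mathbb{E}\bigl[h(w, x)\bigr],
\]
% where the expectation is taken over the stationary input distribution.
```

In the backpropagation application, h(w, x) plays the role of the negative gradient of the per-example error, so the limiting ODE follows the gradient flow of the expected error.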