Globally convergent algorithms with local learning rates

  • Authors:
  • G. D. Magoulas; V. P. Plagianakos; M. N. Vrahatis

  • Affiliations:
  • Dept. of Inf. Syst. & Comput., Brunel Univ., London

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2002

Abstract

A novel generalized theoretical result is presented that underpins the development of globally convergent first-order batch training algorithms which employ local learning rates. This result allows us to equip algorithms of this class with a strategy for adapting the overall search direction so that it remains a descent direction. In this way, a decrease of the batch error measure at each training iteration is ensured, and convergence of the sequence of weight iterates to a local minimizer of the batch error function is obtained even from remote initial weights. The effectiveness of the theoretical result is illustrated in three application examples by comparing two well-known training algorithms with local learning rates to their globally convergent modifications.
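
To make the strategy concrete, the Python sketch below shows one way such a step can be organized: a vector of local (per-weight) learning rates scales the negative gradient, the resulting direction is safeguarded so that it is a descent direction, and a backtracking test then enforces a strict decrease of the batch error at every iteration. The quadratic model, the fixed rate vector, and all names here are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def batch_error(w, X, y):
    """Sum-of-squares batch error for a linear stand-in model."""
    return 0.5 * np.sum((X @ w - y) ** 2)

def batch_grad(w, X, y):
    """Gradient of the batch error with respect to the weights."""
    return X.T @ (X @ w - y)

def train_step(w, eta, X, y, beta=0.5, sigma=1e-4):
    """One batch iteration with local learning rates eta (illustrative).

    The raw direction d = -eta * g is checked to be a descent direction
    (g @ d < 0) and otherwise replaced by steepest descent; a
    backtracking (Armijo) test then shrinks the step until the batch
    error strictly decreases -- the monotone-decrease property the
    abstract refers to.
    """
    g = batch_grad(w, X, y)
    d = -eta * g                      # scale each component locally
    if g @ d >= 0:                    # safeguard: force a descent direction
        d = -g
    t, E0 = 1.0, batch_error(w, X, y)
    while batch_error(w + t * d, X, y) > E0 + sigma * t * (g @ d):
        t *= beta                     # backtrack until sufficient decrease
    return w + t * d

# Hypothetical usage: the error decreases monotonically at each step.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
w, eta = np.zeros(3), np.full(3, 0.1)  # assumed initial local rates
for _ in range(20):
    w = train_step(w, eta, X, y)
```

In a full adaptive scheme the entries of `eta` would themselves be updated between iterations (e.g., by a sign-based heuristic); the point of the sketch is only the descent safeguard and the sufficient-decrease test, which together yield the convergence from remote initial weights described above.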