Multi-Layer Neural Networks with Improved Learning Algorithms

  • Authors: Michael Negnevitsky
  • Affiliations: University of Tasmania
  • Venue: DICTA '05: Proceedings of the Digital Image Computing: Techniques and Applications Conference
  • Year: 2005

Abstract

The error back-propagation algorithm has traditionally been the most popular training method for multi-layer feed-forward networks. Because it converges slowly to the error minimum, several techniques for accelerating back-propagation learning have been developed. These include hyperbolic tangent activation functions, momentum, adaptive learning rates and fuzzy control of the learning parameters. This paper examines these methods.
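To make two of the listed accelerations concrete, the following is a minimal sketch (not the paper's exact setup; network size, learning rate and momentum values are illustrative assumptions) of back-propagation with hyperbolic tangent activations and a momentum term, trained on the XOR problem:

```python
import numpy as np

# Illustrative sketch: back-propagation with tanh activations and momentum.
# Architecture and hyperparameters are assumptions, not taken from the paper.
rng = np.random.default_rng(0)

# XOR training set; targets in (-1, 1) to match the tanh output range.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[-1.], [1.], [1.], [-1.]])

# One hidden layer of 4 tanh units, one tanh output unit.
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

eta, alpha = 0.1, 0.9                      # learning rate, momentum coefficient
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    y = np.tanh(h @ W2 + b2)
    # Backward pass; tanh'(u) = 1 - tanh(u)^2 gives the local gradients.
    d_out = (y - T) * (1 - y**2)
    d_hid = (d_out @ W2.T) * (1 - h**2)
    # Momentum update: each step adds a fraction of the previous step,
    # which smooths oscillations and speeds travel along shallow valleys.
    vW2 = -eta * (h.T @ d_out) + alpha * vW2; W2 += vW2
    vb2 = -eta * d_out.sum(0) + alpha * vb2; b2 += vb2
    vW1 = -eta * (X.T @ d_hid) + alpha * vW1; W1 += vW1
    vb1 = -eta * d_hid.sum(0) + alpha * vb1; b1 += vb1

y = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
mse = float(np.mean((y - T)**2))
```

Adaptive learning rates (and the paper's fuzzy control of the learning parameters) would replace the fixed `eta` with a value adjusted during training; with plain gradient descent (`alpha = 0`) the same network typically needs noticeably more epochs to reach the same error.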