A Note on Learning Automata Based Schemes for Adaptation of BP Parameters

  • Authors:
  • Mohammad Reza Meybodi; Hamid Beigy

  • Venue:
  • IDEAL '00 Proceedings of the Second International Conference on Intelligent Data Engineering and Automated Learning, Data Mining, Financial Engineering, and Intelligent Agents
  • Year:
  • 2000

Abstract

Backpropagation is often used as the learning algorithm in layered-structure neural networks because of its efficiency. However, backpropagation is not free of problems: the learning process sometimes gets trapped in a local minimum, and the network cannot produce the required response. In addition, the algorithm has a number of parameters, such as the learning rate (µ), the momentum factor (α), and the steepness parameter (λ), whose values are not known in advance and must be determined by trial and error. The appropriate selection of these parameters has a large effect on the convergence of the algorithm, and many techniques that adaptively adjust them have been developed to increase the speed of convergence. A recently developed class of algorithms uses learning automata (LA) to adjust the parameters µ, α, and λ based on observation of the random response of the neural network. One important aspect of learning automata based schemes is their remarkable effectiveness in increasing the speed of convergence. Another important aspect, which has not been pointed out earlier, is their ability to escape from local minima with high probability during the training period. In this report we study the ability of LA based schemes to escape from local minima when standard BP fails to find the global minimum. It is demonstrated through simulation that LA based schemes have a higher ability to escape from local minima than other schemes such as SAB, SuperSAB, Fuzzy BP, the ASBP method, and the VLR method.
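The schemes described treat parameter selection as a learning-automaton problem: at each stage an automaton chooses a value for a parameter such as µ and is rewarded or penalized according to the network's observed response (e.g., whether the training error decreased). As an illustration only, the following minimal Python sketch pairs gradient descent with momentum with a variable-structure L_{R-P} automaton that selects the learning rate. The candidate values for µ, the reward/penalty step sizes, and the toy quadratic loss standing in for the network error are all assumptions for the sketch, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic objective standing in for the network error E(w).
# (Assumption: any differentiable loss works; a quadratic keeps the sketch short.)
def loss(w):
    return float(np.sum((w - 3.0) ** 2))

def grad(w):
    return 2.0 * (w - 3.0)

# Hypothetical action set for the learning rate µ and L_{R-P} step sizes.
MU_CANDIDATES = [0.01, 0.05, 0.1, 0.4]
A_REWARD, B_PENALTY = 0.1, 0.02

class LinearRewardPenaltyLA:
    """Variable-structure learning automaton with the L_{R-P} update rule."""
    def __init__(self, n):
        self.p = np.full(n, 1.0 / n)  # action probability vector, sums to 1
        self.last = 0

    def choose(self):
        self.last = rng.choice(len(self.p), p=self.p)
        return self.last

    def update(self, rewarded):
        n, i = len(self.p), self.last
        if rewarded:   # favourable response: shift probability toward action i
            self.p = (1 - A_REWARD) * self.p
            self.p[i] += A_REWARD
        else:          # unfavourable response: spread probability to the others
            self.p = (1 - B_PENALTY) * self.p + B_PENALTY / (n - 1)
            self.p[i] -= B_PENALTY / (n - 1)

# Gradient descent with momentum; µ is chosen by the automaton each epoch.
w = rng.normal(size=5)
velocity = np.zeros_like(w)
alpha = 0.9                      # momentum factor α, held fixed in this sketch
la = LinearRewardPenaltyLA(len(MU_CANDIDATES))
prev_error = loss(w)

for epoch in range(100):
    mu = MU_CANDIDATES[la.choose()]               # automaton selects µ
    velocity = -mu * grad(w) + alpha * velocity   # Δw = -µ∇E + αΔw_prev
    w = w + velocity
    error = loss(w)
    la.update(rewarded=(error < prev_error))      # reward iff the error fell
    prev_error = error

print("final error:", loss(w), "µ probabilities:", np.round(la.p, 3))
```

In the setting of the paper, the environment response would come from the BP training error of the actual network rather than a toy loss, and separate automata can adapt α and λ in the same reward/penalty fashion. The stochastic action choice is what gives such schemes a chance of perturbing the search out of a local minimum that traps standard BP.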