ANASA-a stochastic reinforcement algorithm for real-valued neural computation

  • Authors:
  • A. V. Vasilakos; N. H. Loukas

  • Affiliations:
  • Inst. of Comput. Sci., Found. for Res. & Technol.-Hellas, Heraklion

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1996

Abstract

This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts found in self-organizing neural network theory and in reinforcement estimator learning algorithms to extract and exploit information about previous input pattern presentations. In addition, it uses an adaptive learning rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems and concepts. Experimentally, the new algorithm yields results that are superior to existing associative reinforcement learning methods in terms of accuracy and convergence rate. The rapid convergence of ANASA is demonstrated in a simple learning task, where it is used as a single neural unit, and in mathematical function modeling problems, where it is used to train various multilayered neural networks.
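
To make the abstract's description concrete, the sketch below shows a generic stochastic real-valued reinforcement unit in the spirit of what is described: a Gaussian stochastic activation whose exploration width and learning rate adapt as an internal reinforcement estimate improves. The abstract does not give ANASA's actual update rules, so every name, constant, and update form here (StochasticRealValuedUnit, lr, sigma, r_baseline, the specific adaptation formulas) is an illustrative assumption, not the authors' method.

```python
import numpy as np

class StochasticRealValuedUnit:
    """Hedged sketch of a continuous-output reinforcement-learning unit
    (assumed form; not the ANASA update rules from the paper)."""

    def __init__(self, n_inputs, lr=0.1, sigma=0.5, rng=None):
        self.rng = rng or np.random.default_rng()
        self.w = np.zeros(n_inputs)   # weights producing the mean output
        self.lr = lr                  # base learning rate
        self.sigma = sigma            # width of the stochastic activation
        self.r_baseline = 0.0         # running estimate of reinforcement

    def act(self, x):
        mu = float(self.w @ x)                 # deterministic mean output
        y = self.rng.normal(mu, self.sigma)    # stochastic real-valued output
        return y, mu

    def learn(self, x, y, mu, r, beta=0.1):
        # Reinforcement-comparison update: push the mean toward (away from)
        # the sampled perturbation when the reward beats (falls short of)
        # the running reinforcement estimate.
        delta_r = r - self.r_baseline
        eligibility = (y - mu) / (self.sigma ** 2 + 1e-12)
        # Adaptive learning rate (assumed form): damp updates as the
        # reinforcement estimate rises.
        lr_t = self.lr * max(1.0 - self.r_baseline, 0.05)
        self.w += lr_t * delta_r * eligibility * x
        # Self-adjusting stochastic activation (assumed form): explore less
        # as expected reinforcement improves.
        self.sigma = max(0.01, 0.5 * (1.0 - self.r_baseline))
        self.r_baseline += beta * delta_r


# Minimal usage example on a toy task: learn to output a fixed target,
# with reward defined as closeness to that target.
if __name__ == "__main__":
    unit = StochasticRealValuedUnit(n_inputs=1)
    x, target = np.array([1.0]), 0.7
    for _ in range(2000):
        y, mu = unit.act(x)
        r = max(0.0, 1.0 - abs(y - target))   # scalar reinforcement in [0, 1]
        unit.learn(x, y, mu, r)
    print("learned mean output:", float(unit.w @ x))
```

The coupling between the reinforcement estimate, the learning rate, and the exploration width is only a plausible reading of the abstract's "adaptive learning rate function" and "self-adjusting stochastic activation"; the paper itself should be consulted for the actual formulas and convergence conditions.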