Optimizing a new nonlinear reinforcement scheme with Breeder genetic algorithm

  • Authors:
  • Florin Stoica; Dana Simian

  • Affiliations:
  • Department of Informatics, "Lucian Blaga" University of Sibiu, Sibiu, Romania; Department of Informatics, "Lucian Blaga" University of Sibiu, Sibiu, Romania

  • Venue:
  • NN'10/EC'10/FS'10 Proceedings of the 11th WSEAS international conference on neural networks and 11th WSEAS international conference on evolutionary computing and 11th WSEAS international conference on fuzzy systems
  • Year:
  • 2010

Abstract

Using Stochastic Learning Automata, we can build robust learning systems without complete knowledge of their environment. A Stochastic Learning Automaton is a learning entity that learns the optimal action to choose from its set of possible actions. The algorithm that guarantees the desired learning process is called a reinforcement scheme. A major advantage of reinforcement learning over other learning approaches is that it requires no information about the environment other than the reinforcement signal. The drawback is that, for most applications, a reinforcement learning system is slower than other approaches, since every action must be tested a number of times to achieve good performance. In our approach, the learning process must be much faster than the changes in the environment, and to accomplish this we need efficient reinforcement schemes. The aim of this paper is to present a reinforcement scheme that satisfies all necessary and sufficient conditions for absolute expediency in a stationary environment. Our scheme provides better results than other nonlinear reinforcement schemes. Furthermore, using a Breeder genetic algorithm, we provide the optimal learning parameters for our scheme, in order to reach the best performance.
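The abstract does not give the paper's nonlinear reinforcement scheme itself, but the general automaton/environment loop it describes can be sketched with the classical linear reward-inaction (L_R-I) update, which is shown here purely as an illustrative stand-in. All names below (`LearningAutomaton`, `environment`, the learning rate `a`) are assumptions for this sketch, not identifiers from the paper.

```python
import random

class LearningAutomaton:
    """Stochastic learning automaton keeping a probability vector over
    actions. The update below is the standard linear reward-inaction
    (L_R-I) scheme, used here only to illustrate the loop; the paper's
    actual scheme is nonlinear and its parameters are tuned by a
    Breeder genetic algorithm."""

    def __init__(self, n_actions, learning_rate=0.05, seed=0):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.a = learning_rate                  # reinforcement step size
        self.rng = random.Random(seed)

    def choose(self):
        # Sample an action index according to the current probabilities.
        r, acc = self.rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def update(self, action, rewarded):
        # L_R-I: on a reward signal, shift probability mass toward the
        # chosen action; on a penalty, leave the vector unchanged.
        if rewarded:
            for i in range(len(self.p)):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] -= self.a * self.p[i]

# Toy environment for illustration: action 0 is always rewarded,
# action 1 never is. Real environments respond stochastically and only
# emit this binary reinforcement signal.
def environment(action):
    return action == 0

la = LearningAutomaton(n_actions=2)
for _ in range(2000):
    act = la.choose()
    la.update(act, environment(act))
```

After the loop, the probability of the rewarded action dominates the vector, matching the abstract's point that each action must be tried repeatedly before the automaton settles on the optimal one.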