This article considers the problem of learning the correct temporal sequence of discrete behaviors, drawn from a finite behavior set, that completes a complex task, using only stochastic reinforcement from the environment. A trial-and-error learning algorithm is proposed, inspired by backward chaining, a technique from animal training. The procedure is formulated analytically as a serial composition of finite action-set learning automata with delayed reinforcement. Simulations confirm that the algorithm does learn the target sequence. The effect of varying the magnitude and quality of reinforcement is investigated both in theory and in simulation, showing a fundamental tradeoff between the quality and the speed of learning. The algorithm can also select desirable action sequences from among several feasible ones through the use of relative rewards, which may be interpreted via the Bellman principle of optimality.
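The core ingredients described above can be sketched in code. The following is a minimal, illustrative sketch only: it uses a standard linear reward-inaction (L_R-I) update for each finite action-set learning automaton and assigns one automaton per position in the sequence, with a single stochastic reward delivered only after the whole sequence is executed. It deliberately omits the paper's backward-chaining schedule and relative-reward mechanism, and all names and parameter values (`FALA`, `step`, `reward_prob`) are this sketch's own, not the paper's.

```python
import random


class FALA:
    """Finite action-set learning automaton with linear
    reward-inaction (L_R-I) updates: probabilities move toward a
    rewarded action; penalties leave them unchanged."""

    def __init__(self, n_actions, step=0.05, rng=None):
        self.p = [1.0 / n_actions] * n_actions  # uniform initial action probabilities
        self.step = step                        # learning-rate parameter
        self.rng = rng or random.Random()

    def choose(self):
        # Sample an action index according to the probability vector p.
        r, acc = self.rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def reward(self, action):
        # L_R-I update: shift probability mass toward the rewarded action.
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.step * (1.0 - self.p[i])
            else:
                self.p[i] -= self.step * self.p[i]


def train_sequence(target, n_actions=3, trials=4000, reward_prob=0.9, seed=0):
    """Learn a fixed action sequence from delayed, stochastic reinforcement.

    One automaton per sequence position; after each trial the environment
    rewards a correct full sequence only with probability `reward_prob`,
    so the reinforcement signal is stochastic."""
    rng = random.Random(seed)
    automata = [FALA(n_actions, rng=rng) for _ in target]
    for _ in range(trials):
        chosen = [a.choose() for a in automata]
        if chosen == list(target) and rng.random() < reward_prob:
            # Delayed reinforcement: every automaton in the chain is
            # rewarded for the action it contributed to the sequence.
            for a, act in zip(automata, chosen):
                a.reward(act)
    # Report each automaton's most probable action.
    return [max(range(n_actions), key=lambda i: a.p[i]) for a in automata]
```

For example, `train_sequence((2, 0))` should converge to `[2, 0]`: since only the correct sequence is ever rewarded, the reward-inaction scheme monotonically concentrates each automaton's probability mass on its target action. Lowering `reward_prob` illustrates the quality-versus-speed tradeoff noted in the abstract: noisier reinforcement slows convergence.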