Absorbing and ergodic discretized two-action learning automata
IEEE Transactions on Systems, Man and Cybernetics
Learning automata: an introduction
Continuous Learning Automata Solutions to the Capacity Assignment Problem
IEEE Transactions on Computers
Learning Algorithms Theory and Applications
Adaptation of Parameters of BP Algorithm Using Learning Automata
SBRN '00 Proceedings of the VI Brazilian Symposium on Neural Networks (SBRN'00)
A layered approach to learning coordination knowledge in multiagent environments
Applied Intelligence
The Bayesian pursuit algorithm: a new family of estimator learning automata
IEA/AIE'11 Proceedings of the 24th international conference on Industrial engineering and other applications of applied intelligent systems conference on Modern approaches in applied intelligence - Volume Part II
Engineering Applications of Artificial Intelligence
Empirical verification of a strategy for unbounded resolution in finite player Goore Games
AI'06 Proceedings of the 19th Australian joint conference on Artificial Intelligence: advances in Artificial Intelligence
Service selection in stochastic environments: a learning-automaton based solution
Applied Intelligence
Finite time analysis of the pursuit algorithm for learning automata
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Continuous and discretized pursuit learning schemes: various algorithms and their comparison
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Discretized Bayesian pursuit – a new scheme for reinforcement learning
IEA/AIE'12 Proceedings of the 25th international conference on Industrial Engineering and Other Applications of Applied Intelligent Systems: advanced research in applied artificial intelligence
There are currently two fundamental paradigms used to enhance the convergence speed of Learning Automata (LA). The first involves utilizing estimates of the reward probabilities, while the second involves discretizing the probability space in which the LA operates. This paper demonstrates how both can be utilized simultaneously, in particular by using the family of Bayesian estimates, which have been proven to have distinct advantages over their maximum-likelihood counterparts. The success of LA-based estimator algorithms over the classical, Linear Reward-Inaction (LRI)-like schemes can be explained by their ability to pursue the actions with the highest reward probability estimates. Without access to reward probability estimates, it makes sense for schemes like the LRI to first make large exploring steps, and then to gradually turn exploration into exploitation by making progressively smaller learning steps. However, this behavior becomes counter-intuitive when pursuing actions based on their estimated reward probabilities. Learning should then ideally proceed in progressively larger steps, as the reward probability estimates become more accurate. This paper introduces a new estimator algorithm, the Discretized Bayesian Pursuit Algorithm (DBPA), that achieves this by incorporating both of the above paradigms. The DBPA is implemented by linearly discretizing the action probability space of the Bayesian Pursuit Algorithm (BPA) (Zhang et al. in IEA-AIE 2011, Springer, New York, pp. 608–620, 2011). The key innovation of this paper is that the linear discrete updating rules mitigate the counter-intuitive behavior of the corresponding linear continuous updating rules by augmenting them with the reward probability estimates. Extensive experimental results show the superiority of the DBPA over previous estimator algorithms; indeed, the DBPA is probably the fastest reported LA to date.
Apart from the rigorous experimental demonstration of the strength of the DBPA, the paper also briefly presents proofs of why the BPA and the DBPA are ε-optimal in stationary environments.
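The abstract describes the mechanism only in prose. The following is a minimal, illustrative sketch of a discretized pursuit learner with Bayesian reward estimates, not the authors' implementation: the class name `DBPA`, the resolution parameter `N`, and the use of the Beta-posterior mean as the estimate (the BPA pursues a Bayesian bound on the posterior rather than its mean) are assumptions made here for illustration.

```python
import random

class DBPA:
    """Illustrative sketch of a discretized Bayesian pursuit learner.

    Action probabilities move in fixed multiples of delta = 1/(r*N)
    toward the action with the currently highest Bayesian estimate.
    """

    def __init__(self, num_actions, resolution):
        self.r = num_actions
        self.delta = 1.0 / (num_actions * resolution)  # discretized step size
        self.p = [1.0 / num_actions] * num_actions     # action probabilities
        # Beta(a, b) posterior per action, starting from a uniform prior.
        self.a = [1] * num_actions  # 1 + number of rewards
        self.b = [1] * num_actions  # 1 + number of penalties

    def select(self):
        # Sample an action according to the current probability vector.
        u, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if u < acc:
                return i
        return self.r - 1

    def update(self, action, rewarded):
        # Update the Bayesian estimate of the chosen action's reward probability.
        if rewarded:
            self.a[action] += 1
        else:
            self.b[action] += 1
        # Pursue the action with the highest posterior estimate
        # (posterior mean used here as a simplifying assumption).
        est = [a / (a + b) for a, b in zip(self.a, self.b)]
        best = max(range(self.r), key=est.__getitem__)
        # Linear discretized pursuit step: every other action loses at most
        # delta, and the pursued action absorbs the remaining mass.
        for i in range(self.r):
            if i != best:
                self.p[i] = max(self.p[i] - self.delta, 0.0)
        self.p[best] = 1.0 - sum(self.p[i] for i in range(self.r) if i != best)
```

Because the step size is fixed rather than shrinking, the scheme takes its largest effective strides once the estimates have become reliable, which is exactly the behavior the abstract argues for.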