No. 318 on SWAT 88: 1st Scandinavian Workshop on Algorithm Theory
Learning automata: an introduction
Parallel searching in the plane
Computational Geometry: Theory and Applications
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Automata learning and intelligent tertiary searching for stochastic point location
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Parameter learning from stochastic teachers and stochastic compulsive liars
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
A Solution to the Stochastic Point Location Problem in Metalevel Nonstationary Environments
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Adaptive step searching for solving stochastic point location problem
ICIC'13: Proceedings of the 9th International Conference on Intelligent Computing Theories
This paper deals with the Stochastic Point Location (SPL) problem. It presents a solution that is novel, in both philosophy and strategy, relative to all previously reported learning algorithms for this problem. The SPL problem concerns the task of a Learning Mechanism attempting to locate an unknown point on a line. The mechanism interacts with a random environment which informs it, possibly erroneously, whether the unknown parameter lies to the left or to the right of its current guess. The pioneering work [6] on the SPL problem presented a solution that performs a one-dimensional controlled Random Walk (RW) in a discretized space to locate the unknown parameter. The primary drawback of that scheme is that the steps taken are always very conservative: decreasing the step size yields higher accuracy, but correspondingly decreases the convergence speed. In this paper we introduce the Hierarchical Stochastic Searching on the Line (HSSL) solution, which is shown to converge orders of magnitude faster than the original SPL solution reported in [6]. The heart of the HSSL strategy is a controlled RW on a discretized space which, unlike traditional RWs, is not structured on the line per se, but rather on a binary tree whose nodes are intervals of the line. The overall learning scheme is shown to be optimal whenever the effectiveness of the environment, p, is greater than the golden ratio conjugate [4] (approximately 0.618) --- which is, in itself, a very intriguing phenomenon. The solution has been both analytically examined and simulated, with extremely fascinating results. The strategy presented here can be utilized to determine the best parameter to be used in any optimization problem, and also in any application where the SPL can be applied [6].
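To make the baseline scheme concrete, the following is a minimal sketch of the discretized random-walk solution of [6] that the abstract contrasts with HSSL. It is an illustrative reconstruction, not the authors' code: the grid `resolution`, the effectiveness `p`, and the step counts are all assumed parameters. The environment reports the true direction toward the unknown point with probability `p` and lies otherwise; the learner moves one grid cell in the reported direction, so accuracy is bounded by the grid spacing and convergence is slow for fine grids.

```python
import random

def spl_random_walk(target, p=0.8, resolution=1000, steps=20000, seed=0):
    """Baseline discretized random-walk SPL learner (hypothetical sketch).

    The guess lives on a grid of `resolution + 1` points in [0, 1].
    The stochastic environment tells the learner, truthfully with
    probability p, on which side of the guess `target` lies; the
    learner takes a single conservative step in the reported direction.
    """
    rng = random.Random(seed)
    guess = resolution // 2  # start in the middle of the line
    for _ in range(steps):
        truthful = rng.random() < p
        toward_right = target > guess / resolution
        go_right = toward_right if truthful else not toward_right
        if go_right:
            guess = min(resolution, guess + 1)  # step right, clamped
        else:
            guess = max(0, guess - 1)           # step left, clamped
    return guess / resolution
```

For any informative environment (p > 0.5) the walk drifts toward the target and oscillates around it; halving the grid spacing halves the residual error but doubles the number of steps needed to cover the same distance, which is exactly the accuracy/speed trade-off the HSSL tree search is designed to break.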