Reinforcement learning by chaotic exploration generator in target capturing task
KES'05: Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part I
A process of trial and error plays an important role not only in human learning but also in machine learning. In reinforcement learning, which originated from experimental studies of learning in psychology, such a process is called exploration. A uniform pseudorandom number generator appears to be the natural choice for exploration. However, a chaotic source is also known to produce a random-like sequence, much as a stochastic source does. In previous work, we applied the random-like output of a deterministic chaotic generator based on the logistic map to exploration in a nonstationary shortcut maze problem, and observed that the chaotic generator outperformed a stochastic random exploration generator. In this study, to confirm this difference in performance between the two generators, we examine another nonstationary task: target capturing. The simulation results for this task agree with those of our previous study. From the viewpoint of multi-agent systems, many systems are inhomogeneous or heterogeneous, composed of several kinds of agents; in such systems, the agents' exploration is not uniform. Chaotic exploration may therefore be well suited to the heterogeneity of such multi-agent systems.
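The contrast described in the abstract can be illustrated with a minimal sketch: an epsilon-greedy action selector whose exploration draws come from a logistic-map orbit instead of a stochastic pseudorandom generator. The function names, the r = 4.0 parameter choice, and the toy Q-values below are illustrative assumptions, not the paper's actual experimental setup; note also that the logistic map's invariant density at r = 4 is not uniform (it concentrates near 0 and 1), which is one concrete way chaotic exploration differs from uniform random exploration.

```python
import random

def logistic_map(x, r=4.0):
    """Generator yielding a chaotic, random-like sequence in (0, 1).

    With r = 4.0 the logistic map x_{t+1} = r * x_t * (1 - x_t) is fully
    chaotic, so its deterministic orbit can stand in for a random source.
    Its invariant density is arcsine-shaped, not uniform, unlike a
    stochastic uniform generator.
    """
    while True:
        x = r * x * (1.0 - x)
        yield x

def epsilon_greedy(q_values, source, epsilon=0.1):
    """Select an action: explore when the next draw falls below epsilon."""
    if next(source) < epsilon:
        # Explore: use another draw to pick an action index.
        return int(next(source) * len(q_values)) % len(q_values)
    # Exploit: greedy action with the largest Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Deterministic chaotic exploration source (seed value is arbitrary).
chaotic = logistic_map(0.3)
# Conventional stochastic exploration source, for comparison.
stochastic = iter(lambda: random.random(), None)

q = [0.2, 0.5, 0.1]  # toy Q-values for a 3-action state
chaotic_actions = [epsilon_greedy(q, chaotic) for _ in range(1000)]
random_actions = [epsilon_greedy(q, stochastic) for _ in range(1000)]
```

Swapping `chaotic` for `stochastic` changes only the exploration source, so the two policies can be compared on the same task, which mirrors the comparison the abstract describes.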