This paper proposes a tabular reinforcement learning (RL) method, named FMM-RL, based on an improved fuzzy min-max (FMM) neural network. The FMM neural network is used to segment the state space of the RL problem, with the aim of mitigating the "curse of dimensionality" in RL; it also markedly speeds up convergence. Regions of the state space correspond to the hyperboxes of the FMM network, and the minimum and maximum points of each hyperbox define the partition boundaries. As the FMM neural network is trained, the state space is partitioned through operations on the hyperboxes, so a favorable generalization over the state space can be obtained. Finally, the method is applied to learning behaviors for a reactive robot. Experiments show that the algorithm can effectively solve the problem of navigation in a complicated unknown environment.
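The abstract's core idea can be illustrated with a small sketch: a state-space partition whose cells are hyperboxes defined by minimum and maximum corner points, driving a tabular Q-learner whose table is indexed by hyperbox rather than by raw state. This is a minimal illustration, not the paper's FMM-RL algorithm: the class names (`HyperboxPartition`, `FMMQLearner`), the size bound `theta`, and the crisp containment/expansion rules are my assumptions, and a real FMM network would additionally use fuzzy membership functions and overlap/contraction operations omitted here.

```python
import numpy as np


class HyperboxPartition:
    """Hyperbox-style state partition (simplified FMM sketch).

    Each region is a hyperbox stored as a pair of corner points:
    v (minimum point) and w (maximum point).  A state is mapped to
    an existing box that contains it; otherwise the nearest box is
    expanded to cover it, provided no edge exceeds theta; otherwise
    a new point-sized box is created.
    """

    def __init__(self, dim, theta=0.3):
        self.dim = dim
        self.theta = theta  # assumed bound on hyperbox edge length
        self.v = []         # minimum points, one per hyperbox
        self.w = []         # maximum points, one per hyperbox

    def index(self, x):
        x = np.asarray(x, dtype=float)
        # 1) containment: reuse a hyperbox that already covers x
        for j, (v, w) in enumerate(zip(self.v, self.w)):
            if np.all(v <= x) and np.all(x <= w):
                return j
        # 2) expansion: grow a box if it stays within the size bound
        for j, (v, w) in enumerate(zip(self.v, self.w)):
            nv, nw = np.minimum(v, x), np.maximum(w, x)
            if np.all(nw - nv <= self.theta):
                self.v[j], self.w[j] = nv, nw
                return j
        # 3) otherwise create a new point-sized hyperbox at x
        self.v.append(x.copy())
        self.w.append(x.copy())
        return len(self.v) - 1


class FMMQLearner:
    """Tabular Q-learning over hyperbox indices instead of raw states."""

    def __init__(self, dim, n_actions, alpha=0.1, gamma=0.95, theta=0.3):
        self.part = HyperboxPartition(dim, theta)
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.q = {}  # hyperbox index -> vector of action values

    def _row(self, s):
        j = self.part.index(s)
        if j not in self.q:
            self.q[j] = np.zeros(self.n_actions)
        return j

    def act(self, s, eps=0.1, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        j = self._row(s)
        if rng.random() < eps:
            return int(rng.integers(self.n_actions))  # explore
        return int(np.argmax(self.q[j]))              # exploit

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning update on partition indices.
        j, j2 = self._row(s), self._row(s2)
        target = r + self.gamma * np.max(self.q[j2])
        self.q[j][a] += self.alpha * (target - self.q[j][a])
```

Because nearby states fall into (or expand) the same hyperbox, many raw states share one Q-table row, which is the generalization effect the abstract attributes to the hyperbox partition.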