State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network

  • Authors:
  • Yong Duan;Baoxia Cui;Xinhe Xu

  • Affiliations:
  • Yong Duan, Baoxia Cui: School of Information Science & Engineering, Shenyang University of Technology, Shenyang, 110023, China
  • Xinhe Xu: Institute of AI and Robotics, Northeastern University, Shenyang, 110004, China

  • Venue:
  • ISNN '07: Proceedings of the 4th International Symposium on Neural Networks, Part II: Advances in Neural Networks
  • Year:
  • 2007


Abstract

This paper proposes a tabular reinforcement learning (RL) method, named FMM-RL, based on an improved fuzzy min-max (FMM) neural network. The FMM neural network is used to segment the state space of the RL problem, which alleviates the "curse of dimensionality" in RL and markedly improves the speed of convergence. Hyperboxes of the FMM network serve as regions of the state space, with the minimal and maximal points of each hyperbox defining the partition boundaries. During training of the FMM neural network, the state space is partitioned through operations on the hyperboxes, yielding favorable generalization over the state space. Finally, the method is applied to learn behaviors for a reactive robot. Experiments show that the algorithm can effectively solve the navigation problem in a complicated unknown environment.
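The abstract's core idea can be illustrated with a minimal sketch of an FMM hyperbox: each box is a state-space region bounded by a min point v and a max point w, a fuzzy membership function scores how well a state fits the region, and a size-bounded expansion operation grows the box to absorb new states. This sketch uses Simpson's classic FMM membership and expansion rules as a stand-in; the paper's improved network, its parameter names (`gamma`, `theta`), and the `Hyperbox` class here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

class Hyperbox:
    """One FMM hyperbox: a state-space region bounded by min point v and max point w.
    Illustrative sketch of the standard (Simpson-style) FMM operations, not the
    paper's improved variant."""

    def __init__(self, x, gamma=1.0):
        x = np.asarray(x, dtype=float)
        self.v = x.copy()      # minimal point (lower partition boundary)
        self.w = x.copy()      # maximal point (upper partition boundary)
        self.gamma = gamma     # steepness of the fuzzy membership decay

    def membership(self, x):
        """Fuzzy membership in [0, 1]: exactly 1 inside the box, decaying outside."""
        x = np.asarray(x, dtype=float)
        below = np.maximum(0.0, 1.0 - np.maximum(0.0, self.gamma * np.minimum(1.0, self.v - x)))
        above = np.maximum(0.0, 1.0 - np.maximum(0.0, self.gamma * np.minimum(1.0, x - self.w)))
        return float(np.mean((below + above) / 2.0))

    def can_expand(self, x, theta):
        """Expansion test: mean side length after absorbing x must not exceed theta,
        which bounds how coarse any one state-space region can become."""
        x = np.asarray(x, dtype=float)
        return float(np.mean(np.maximum(self.w, x) - np.minimum(self.v, x))) <= theta

    def expand(self, x):
        """Grow the box (widen the partition region) to include state x."""
        x = np.asarray(x, dtype=float)
        self.v = np.minimum(self.v, x)
        self.w = np.maximum(self.w, x)
```

In an FMM-RL-style setup, a new state would be assigned to the hyperbox with the highest membership; if no box can expand to cover it within the size bound `theta`, a fresh hyperbox is created, which is how the partition of the state space emerges during training.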