A State Space Compression Method Based on Multivariate Analysis for Reinforcement Learning in High-Dimensional Continuous State Spaces

  • Authors:
  • Hideki Satoh

  • Affiliations:
  • Future University-Hakodate, Hakodate-shi, 041-8655 Japan (E-mail: jamisato@m.ieice.org)

  • Venue:
  • IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
  • Year:
  • 2006

Abstract

A state space compression method based on multivariate analysis was developed and applied to reinforcement learning in high-dimensional continuous state spaces. First, useful components in the state variables of an environment are extracted, and meaningless ones are removed, by using multiple regression analysis. Next, the state space of the environment is compressed by using principal component analysis so that only a few principal components express the dynamics of the environment. Then, a basis of a feature space for function approximation is constructed from orthonormal bases of the important principal components. A feature space is thus autonomously constructed without preliminary knowledge of the environment, and the environment is effectively expressed in that feature space. An example synchronization problem for multiple logistic maps was solved using this method, demonstrating that it overcomes the curse of dimensionality and achieves high performance without being affected by disturbance states.
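
The following is a minimal NumPy sketch of the pipeline described in the abstract, not the paper's actual implementation: multiple regression to discard state variables that contribute little to a predicted quantity, PCA to compress the remaining variables, and projection onto the retained orthonormal basis as the feature space. The synthetic data, the coefficient threshold, and the 95% variance cutoff are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: N samples of a d-dimensional continuous state and a
# scalar target y (e.g. a quantity the learner must predict). Only the first
# three state variables actually matter; the rest act as disturbance states.
rng = np.random.default_rng(0)
N, d = 500, 12
S = rng.standard_normal((N, d))
y = S[:, :3] @ np.array([1.0, -0.5, 2.0]) + 0.01 * rng.standard_normal(N)

# Step 1: multiple regression analysis -- keep state variables whose
# regression coefficients contribute meaningfully to predicting y.
coef, *_ = np.linalg.lstsq(S, y, rcond=None)
useful = np.abs(coef) > 0.1          # threshold chosen for illustration
S_useful = S[:, useful]

# Step 2: principal component analysis -- compress the useful variables so
# that a few principal components capture most of the variance.
S_centered = S_useful - S_useful.mean(axis=0)
U, sigma, Vt = np.linalg.svd(S_centered, full_matrices=False)
explained = sigma**2 / np.sum(sigma**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)  # ~95% variance
basis = Vt[:k]                       # orthonormal basis of the top-k components

# Step 3: feature space -- project states onto the retained basis; a function
# approximator for reinforcement learning would be built over these features.
features = S_centered @ basis.T
print("useful variables:", np.flatnonzero(useful))
print("retained components:", k, "feature shape:", features.shape)
```

In this sketch the regression and variance thresholds stand in for whatever selection criteria the paper uses; the point is only that the feature basis is derived from the data itself, with no prior knowledge of which state variables matter.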