SVD Reduction in Continuous Environment Reinforcement Learning

  • Authors: Szilveszter Kovács
  • Affiliations: -
  • Venue: Proceedings of the International Conference, 7th Fuzzy Days on Computational Intelligence, Theory and Applications
  • Year: 2001

Abstract

Reinforcement learning methods, which cope with the control difficulties of an unknown environment, have recently been gaining popularity in the autonomous robotics community. One of the possible difficulties of reinforcement learning applications in complex situations is the huge size of the state-value or action-value function representation [2]. The case of continuous environment (continuous valued) reinforcement learning can be even more complicated, as the state-value or action-value functions become continuous functions. In this paper we suggest a way of tackling these difficulties by applying SVD (Singular Value Decomposition) methods [3], [4], [15], [26].
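
To illustrate the core idea of the abstract (not the paper's own algorithm), the following minimal NumPy sketch shows how a truncated SVD can compress a sampled value-function table: the grid, the example value function, and the retained rank r are all hypothetical choices for demonstration only.

```python
import numpy as np

# Hypothetical example: a state-value function V(x, y) sampled on a
# grid over a 2D continuous state space (the function and grid sizes
# are illustrative, not from the paper).
nx, ny = 64, 64
xs = np.linspace(-1.0, 1.0, nx)
ys = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(xs, ys, indexing="ij")
V = np.exp(-(X**2 + Y**2)) + 0.5 * np.sin(2 * X) * np.cos(3 * Y)

# Singular Value Decomposition of the sampled value table.
U, s, Vt = np.linalg.svd(V, full_matrices=False)

# Keep only the r largest singular values: storage drops from
# nx*ny table entries to r*(nx + ny + 1) values.
r = 6
V_reduced = (U[:, :r] * s[:r]) @ Vt[:r, :]

err = np.max(np.abs(V - V_reduced))
print(f"rank-{r} approximation, max abs error: {err:.2e}")
print(f"storage: {nx * ny} -> {r * (nx + ny + 1)} values")
```

Because the singular values of smooth value functions typically decay quickly, a small rank r often suffices, which is what makes SVD-based reduction attractive for large state-value or action-value representations.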