Introduction to Reinforcement Learning
Dynamic Programming
Size reduction by interpolation in fuzzy rule bases (IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics)
Fuzzy approximation via grid point sampling and singular value decomposition (IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics)
Simplifying fuzzy rule-based models using orthogonal transformation methods (IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics)
Reduction of fuzzy rule base via singular value decomposition (IEEE Transactions on Fuzzy Systems)
Reinforcement learning methods, which cope with the difficulty of controlling an unknown environment, have recently been gaining popularity in the autonomous robotics community. One of the main obstacles to applying reinforcement learning in complex situations is the huge size of the state-value or action-value function representation [2]. The continuous-environment (continuous-valued) case of reinforcement learning can be even more demanding, as the state-value or action-value functions become continuous functions. In this paper we suggest a way of tackling these difficulties through the application of SVD (Singular Value Decomposition) methods [3], [4], [15], [26].
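The compression idea can be illustrated with a minimal sketch (all names, grid sizes, and the particular value surface below are illustrative assumptions, not the paper's method): a tabular state-value function over a 2-D grid is treated as a matrix and approximated by a truncated SVD, so only a few singular vectors need to be stored instead of the full table.

```python
import numpy as np

# Illustrative sketch: compress a tabular state-value function V(x, y)
# over a 2-D state grid via truncated SVD. The value surface chosen
# here is a hypothetical smooth function (smooth surfaces tend to be
# well approximated by low rank).
xs = np.linspace(0.0, 1.0, 64)
ys = np.linspace(0.0, 1.0, 64)
# Sum of two outer products -> the table has (numerical) rank 2.
V = np.outer(np.sin(np.pi * xs), np.cos(np.pi * ys)) + 0.5 * np.outer(xs, ys)

# Full SVD of the value table, then keep only the r dominant components.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
r = 2  # retained rank
V_approx = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Storage drops from 64*64 table entries to r*(64 + 64 + 1) numbers,
# while the reconstruction error stays small for low-rank surfaces.
max_err = np.max(np.abs(V - V_approx))
full_size = V.size
compressed_size = r * (len(xs) + len(ys) + 1)
```

Looking up an approximate value at grid cell (i, j) then only needs the retained factors: `(U[i, :r] * s[:r]) @ Vt[:r, j]`, which is the operation an SVD-reduced representation performs in place of a direct table read.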