Fuzzy inference system learning by reinforcement methods

  • Authors:
  • L. Jouffe

  • Affiliations:
  • SODALEC Electronique, Institut National des Sciences Appliquées, Rennes

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
  • Year:
  • 1998

Abstract

Fuzzy Actor-Critic Learning (FACL) and Fuzzy Q-Learning (FQL) are reinforcement learning methods based on dynamic programming (DP) principles. In this paper, they are used to tune the conclusion part of fuzzy inference systems (FIS) online. The only information available for learning is the system feedback, which describes, in terms of reward and punishment, the task the fuzzy agent has to accomplish. At each time step, the agent receives a reinforcement signal according to the last action it performed in the previous state. The problem involves optimizing not only the immediate reinforcement but also the total amount of reinforcement the agent can receive in the future. To illustrate the use of these two learning methods, we first applied them to the problem of finding a fuzzy controller that drives a boat from one bank of a river to the other, across a strong nonlinear current. We then used the well-known Cart-Pole Balancing and Mountain-Car problems to compare our methods with other reinforcement learning methods and to highlight important characteristic aspects of FACL and FQL. We found that the genericity of our methods allows them to handle every kind of reinforcement learning problem (continuous states, discrete or continuous actions, various types of reinforcement functions). The experimental studies also show the superiority of these methods over the related methods found in the literature.
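The abstract's description of FQL maps onto a simple update rule: each fuzzy rule carries quality values for a set of candidate conclusions, the global action and its Q-value are firing-strength-weighted combinations of the per-rule choices, and the temporal-difference error distributes credit back to each rule in proportion to its firing strength. The sketch below is a minimal illustration under assumed details (a 1-D state with triangular membership functions, a shared discrete action set, epsilon-greedy per-rule exploration, and arbitrary hyperparameters); it is not the paper's full algorithm, which also treats eligibility traces, exploration policies, and the actor-critic variant (FACL).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 1-D state, five rules with triangular membership functions,
# and a discrete candidate-action set shared by every rule.
CENTERS = np.linspace(-1.0, 1.0, 5)          # rule centers over the state space
ACTIONS = np.array([-1.0, 0.0, 1.0])         # candidate conclusions per rule
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1           # learning rate, discount, exploration

q = np.zeros((len(CENTERS), len(ACTIONS)))   # q[i, a]: quality of action a in rule i

def firing_strengths(x):
    """Normalized triangular memberships of state x over the rule centers."""
    width = CENTERS[1] - CENTERS[0]
    phi = np.maximum(0.0, 1.0 - np.abs(x - CENTERS) / width)
    s = phi.sum()
    return phi / s if s > 0 else phi

def select_actions(x):
    """Per-rule epsilon-greedy choice; the global action is the
    firing-strength-weighted (defuzzified) mix of the chosen conclusions."""
    greedy = q.argmax(axis=1)
    explore = rng.random(len(CENTERS)) < EPS
    chosen = np.where(explore,
                      rng.integers(len(ACTIONS), size=len(CENTERS)),
                      greedy)
    u = np.dot(firing_strengths(x), ACTIONS[chosen])
    return chosen, u

def fql_step(x, r, x_next, chosen):
    """One FQL update: a TD error on the rule-weighted quality values,
    with credit assigned to each rule in proportion to its firing strength."""
    phi, phi_next = firing_strengths(x), firing_strengths(x_next)
    rules = np.arange(len(CENTERS))
    q_t = np.dot(phi, q[rules, chosen])       # Q(x, U_t) for the taken action
    v_next = np.dot(phi_next, q.max(axis=1))  # V(x'): weighted per-rule maxima
    td = r + GAMMA * v_next - q_t
    q[rules, chosen] += ALPHA * td * phi
```

In an episode loop, one would call select_actions on the current state, apply the resulting global action u to the environment, observe the reward and next state, and pass them to fql_step.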