Adaptive fuzzy systems and control: design and stability analysis.
Reinforcement learning with replacing eligibility traces. Machine Learning (special issue on reinforcement learning).
Incremental multi-step Q-learning. Machine Learning (special issue on reinforcement learning).
Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence.
Introduction to Reinforcement Learning.
Learning to Predict by the Methods of Temporal Differences. Machine Learning.
Fuzzy inference system learning by reinforcement methods. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
Rapid, safe, and incremental learning of navigation strategies. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Online tuning of fuzzy inference systems using dynamic fuzzy Q-learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
Self-learning fuzzy logic controllers for pursuit-evasion differential games. Robotics and Autonomous Systems.
In this paper, we present a new approach to controlling a mobile robot using the Dynamic Fuzzy Q-Learning method. Self-organizing fuzzy inference is introduced to compute actions and Q-functions, enabling us to deal with continuous-valued states and actions. Fuzzy rules can be generated automatically when necessary. Fuzzy inference systems provide a natural means of incorporating bias components for rapid reinforcement learning. The eligibility trace method is incorporated into our algorithm, leading to faster learning; it also helps alleviate the experimentation-sensitivity problem, whereby an arbitrarily bad training policy can result in poor learning. Experimental results demonstrate that the robot is able to learn the right policy within a few trials.
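The core mechanism the abstract describes, Q-values defined over fuzzy rules and updated through eligibility traces, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a fixed rule base of triangular fuzzy sets over a one-dimensional state (the actual Dynamic Fuzzy Q-Learning method generates rules online), and all names (`firing`, `step`) and parameter values are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: a 1-D state covered by triangular fuzzy sets,
# a small discrete action set per rule, and a Q-value per (rule, action).
centers = np.linspace(0.0, 1.0, 5)   # fuzzy set centers
width = 0.25                         # half-width of each triangle
actions = np.array([-1.0, 0.0, 1.0])
q = np.zeros((len(centers), len(actions)))
trace = np.zeros_like(q)             # eligibility trace per (rule, action)

alpha, gamma, lam = 0.1, 0.95, 0.8   # step size, discount, trace decay

def firing(x):
    """Normalized rule firing strengths (triangular memberships)."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    s = mu.sum()
    return mu / s if s > 0 else mu

def step(x, r, x_next, a_idx, a_next_idx):
    """One SARSA(lambda)-style update over the fuzzy rule base.

    The Q-value of a state-action pair is the firing-strength-weighted
    sum of rule Q-values; the TD error is spread back over all rules
    in proportion to their (decayed) eligibility.
    """
    global q, trace
    phi = firing(x)
    delta = r + gamma * firing(x_next) @ q[:, a_next_idx] - phi @ q[:, a_idx]
    trace *= gamma * lam          # decay all traces
    trace[:, a_idx] += phi        # rules that fired become eligible
    q += alpha * delta * trace    # update every eligible (rule, action)
    return delta
```

Because the trace keeps recently fired rules eligible, a single reward updates the whole chain of rules that led to it, which is what speeds up learning relative to a one-step update.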