This work describes a novel algorithm that integrates an adaptive resonance method (ARM), i.e. an ART-based algorithm with a self-organized design, and a Q-learning algorithm. By dynamically adjusting the size of each neuron's sensitivity region and adaptively eliminating redundant neurons, ARM preserves resources, i.e. available neurons, to accommodate additional categories. As a dynamic-programming-based reinforcement learning method, Q-learning uses the learned action-value function Q, which directly approximates Q^*, the optimal action-value function, independently of the policy being followed. In the proposed algorithm, ARM serves as a clusterer that classifies input vectors from the outside world. The clustered results are then sent to the Q-learning module, which learns to apply the optimal actions to the outside world. Simulation results on the well-known control problem of balancing an inverted pendulum on a cart demonstrate the effectiveness of the proposed algorithm.
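The coupling described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `ARMCluster` class assumes a simple distance-based sensitivity region with a fixed vigilance radius and a fixed learning rate, and `q_update` is the standard tabular Q-learning rule applied to the cluster index that ARM assigns to each raw input vector.

```python
import math

class ARMCluster:
    """Simplified adaptive-resonance-style clusterer (illustrative assumption):
    each neuron has a center and a sensitivity region of radius `vigilance`;
    an input that falls outside every region commits a new neuron/category."""

    def __init__(self, vigilance=0.5):
        self.vigilance = vigilance  # radius of each neuron's sensitivity region
        self.centers = []           # one center per committed neuron

    def categorize(self, x):
        # Find the nearest committed neuron.
        best, best_d = None, float("inf")
        for i, c in enumerate(self.centers):
            d = math.dist(x, c)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= self.vigilance:
            # Resonance: nudge the winning center toward the input
            # (0.1 is an assumed learning rate).
            c = self.centers[best]
            self.centers[best] = tuple(ci + 0.1 * (xi - ci) for ci, xi in zip(c, x))
            return best
        # No neuron resonates: commit a new category for this input.
        self.centers.append(tuple(x))
        return len(self.centers) - 1


def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step on the clustered state s:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(n_actions))
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```

In a cart-pole loop, each raw observation (cart position, cart velocity, pole angle, pole angular velocity) would first pass through `categorize` to obtain a discrete cluster index, and `q_update` would then be applied to that index, so the Q-table grows only as ARM commits new categories.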