Q-Learning with Adaptive State Space Construction
EWLR-6 Proceedings of the 6th European Workshop on Learning Robots
Related papers:
Fuzzy Q-Learning with the modified fuzzy ART neural network. Web Intelligence and Agent Systems.
State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network. ISNN '07 Proceedings of the 4th International Symposium on Neural Networks: Part II: Advances in Neural Networks.
State space segmentation for acquisition of agent behavior. Web Intelligence and Agent Systems; CIRA'09 Proceedings of the 8th IEEE International Conference on Computational Intelligence in Robotics and Automation.
A multi-agent reinforcement learning approach to robot soccer. Artificial Intelligence Review.
Q-learning is an efficient algorithm for acquiring adaptive robot behavior without a priori knowledge of the sensor space or the task. However, a problem arises when applying Q-learning to real-world tasks: how can a state space suitable for Q-learning be constructed without knowledge of the sensor space? In this paper, we propose Q-learning with adaptive state segmentation (QLASS). QLASS provides a method to segment the sensor space incrementally based on sensor vectors and reinforcement signals. Experimental results show that QLASS can segment the sensor space effectively to accomplish the task. Furthermore, we show that the obtained state space reveals the fitness landscape.
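The abstract's idea of growing the state space while learning can be sketched as follows. This is a minimal illustration, not the paper's actual QLASS algorithm: the segmentation rule here (create a new segment whenever a sensor vector lies farther than a hypothetical `radius` from every existing segment center) is an assumption, whereas the paper segments based on both sensor vectors and reinforcement signals. The class name `QLASSSketch` and all parameters are illustrative.

```python
import numpy as np


class QLASSSketch:
    """Sketch of Q-learning over an incrementally segmented sensor space.

    Assumption: a segment is a ball of radius `radius` around a stored
    center; the real QLASS segmentation criterion also uses reinforcement
    signals and is not reproduced here.
    """

    def __init__(self, n_actions, radius=0.5, alpha=0.1, gamma=0.9):
        self.n_actions = n_actions
        self.radius = radius    # hypothetical segment radius
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.centers = []       # segment centers in sensor space
        self.q = []             # one row of Q-values per segment

    def segment(self, sensor_vec):
        """Map a sensor vector to a segment index, creating one if needed."""
        sensor_vec = np.asarray(sensor_vec, dtype=float)
        if self.centers:
            dists = [np.linalg.norm(sensor_vec - c) for c in self.centers]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                return i
        # No existing segment is close enough: grow the state space.
        self.centers.append(sensor_vec)
        self.q.append(np.zeros(self.n_actions))
        return len(self.centers) - 1

    def update(self, s_vec, action, reward, s_next_vec):
        """One tabular Q-learning step on the segmented state space."""
        s = self.segment(s_vec)
        s2 = self.segment(s_next_vec)
        target = reward + self.gamma * np.max(self.q[s2])
        self.q[s][action] += self.alpha * (target - self.q[s][action])
        return s, s2
```

Because segments are created on demand, the number of states adapts to the regions of sensor space the robot actually visits, which is the property the abstract highlights.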