Learning to Perceive and Act by Trial and Error
Machine Learning
Reinforcement learning with dynamic covering of state-action space: partitioning Q-learning
SAB94 Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats 3
Genetic Evolution of a Logic Circuit which Controls an Autonomous Mobile Robot
ICES '96 Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware
Q-Learning with Adaptive State Segmentation (QLASS)
CIRA '97 Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation
Input generalization in delayed reinforcement learning: an algorithm and performance comparisons
IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 2
In this paper, we propose Q-learning with adaptive state space construction, an efficient method for building a state space suitable for Q-learning to accomplish a task in continuous sensor space. In the proposed algorithm, the robot starts with a single state covering the whole sensor space. New states are generated incrementally, either by segmenting a sub-region of the sensor space or by combining existing states. The criterion for incremental segmentation and combination is derived from the Q-learning algorithm. Simulation results show that the proposed algorithm constructs the state space effectively enough to accomplish the task. The resulting states partition the sensor space as a Voronoi tessellation.
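The scheme the abstract describes can be sketched as follows: each state is represented by a prototype point in sensor space, an observation is mapped to the nearest prototype (hence the Voronoi tessellation), and a cell whose Q-values remain inconsistent is segmented by placing a new prototype there. This is a minimal illustrative sketch, not the paper's exact formulation; the class name, the TD-error running average, and the split threshold are all assumptions, and the state-combination step is omitted.

```python
class VoronoiQ:
    """Sketch of Q-learning over an adaptively segmented sensor space.

    States are Voronoi cells of prototype points; a cell with persistently
    large TD error is split at the offending observation. The split
    criterion here (running mean of |TD error|) is an assumption, not the
    paper's exact rule, and state combination is not implemented.
    """

    def __init__(self, n_actions, alpha=0.5, gamma=0.9, split_threshold=1.0):
        self.prototypes = [(0.0, 0.0)]   # start: one state covers all sensor space
        self.q = [[0.0] * n_actions]     # one Q-row per prototype
        self.err = [0.0]                 # running |TD error| per cell (split criterion)
        self.alpha, self.gamma = alpha, gamma
        self.split_threshold = split_threshold

    def state(self, obs):
        # Voronoi assignment: index of the nearest prototype
        dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, obs))
        return min(range(len(self.prototypes)),
                   key=lambda i: dist(self.prototypes[i]))

    def update(self, obs, action, reward, next_obs):
        s, s2 = self.state(obs), self.state(next_obs)
        td = reward + self.gamma * max(self.q[s2]) - self.q[s][action]
        self.q[s][action] += self.alpha * td
        # accumulate TD error; a persistently inconsistent cell gets segmented
        self.err[s] = 0.9 * self.err[s] + 0.1 * abs(td)
        if self.err[s] > self.split_threshold and tuple(obs) not in self.prototypes:
            self.prototypes.append(tuple(obs))   # new state at the observed point
            self.q.append(list(self.q[s]))       # inherit the parent cell's Q-values
            self.err[s] = 0.0
            self.err.append(0.0)
```

With a low threshold, repeated surprising rewards in one region cause that region to be segmented, so the tessellation refines exactly where the value function needs resolution.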