New faster Kernighan-Lin-type graph-partitioning algorithms
ICCAD '93 Proceedings of the 1993 IEEE/ACM international conference on Computer-aided design
Introduction to Reinforcement Learning
Discovering Hierarchy in Reinforcement Learning with HEXQ
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Advances in Neural Information Processing Systems 5, [NIPS Conference]
Reinforcement learning with a hierarchy of abstract models
AAAI'92 Proceedings of the tenth national conference on Artificial intelligence
HEXQ is a reinforcement learning algorithm that decomposes a problem into subtasks and constructs a hierarchy over the state variables. The maximum number of hierarchy levels is bounded by the number of variables representing a state. In HEXQ, values learned for a subtask can be reused in different contexts only when the subtasks are identical; non-identical subtasks must be trained separately. This paper introduces a method that lifts these two restrictions. Experimental results show that the method dramatically reduces training time.
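As background for the abstract above: HEXQ builds its hierarchy by ordering state variables according to how frequently they change, with the fastest-changing variable forming the lowest level. The following is a minimal illustrative sketch of that ordering heuristic, not the authors' code; the function name and trajectory representation are assumptions made for this example.

```python
# Illustrative sketch (not the authors' implementation): HEXQ orders
# state variables by how often they change along an observed trajectory;
# the most frequently changing variable forms the lowest hierarchy level.
from collections import Counter

def order_variables_by_change_frequency(trajectory):
    """trajectory: list of state tuples, one component per state variable.

    Returns variable indices sorted from fastest- to slowest-changing.
    """
    n_vars = len(trajectory[0])
    changes = Counter({i: 0 for i in range(n_vars)})
    for prev, curr in zip(trajectory, trajectory[1:]):
        for i in range(n_vars):
            if prev[i] != curr[i]:
                changes[i] += 1
    # Fastest-changing variable first -> lowest level of the hierarchy.
    return sorted(range(n_vars), key=lambda i: -changes[i])

# Tiny example: variable 0 changes every step, variable 1 only once.
traj = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1)]
print(order_variables_by_change_frequency(traj))  # -> [0, 1]
```

Here variable 0 would seed the bottom-level subtasks, while the slowly changing variable 1 provides the context at the level above.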