How to improve learning efficiency and optimize the encapsulation of subtasks is a key problem that hierarchical reinforcement learning needs to solve. This paper proposes a modular hierarchical reinforcement learning algorithm, named MHRL, in which modularized hierarchical subtasks are trained by their own independent reward systems. During learning, MHRL produces an optimization strategy for the different modular layers, allowing independent modules to execute concurrently. In addition, the paper presents experimental results on application problems with nested learning processes. The results show that MHRL increases learning reusability and dramatically improves learning efficiency.
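The abstract does not give MHRL's update rules, so the following is only a minimal sketch of the underlying idea of subtask modules trained by independent reward systems, using plain tabular Q-learning. The corridor environment, the subgoal at state 3, and all hyperparameters are illustrative assumptions, not details from the paper; in a full hierarchy, a top layer would select among several such modules.

```python
import random

class SubtaskModule:
    """One encapsulated subtask: its own Q-table and its own reward function."""
    def __init__(self, n_states, n_actions, reward_fn, alpha=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.reward_fn = reward_fn          # independent reward system for this module
        self.alpha, self.gamma = alpha, gamma

    def act(self, s, eps=0.1):
        # epsilon-greedy action selection over this module's own Q-table
        if random.random() < eps:
            return random.randrange(len(self.q[s]))
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, s2):
        # Q-learning update driven by the module's private reward
        r = self.reward_fn(s, a, s2)
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.alpha * (target - self.q[s][a])
        return r

def step(s, a, n_states=7):
    # toy 1-D corridor: action 0 moves left, action 1 moves right
    return max(0, min(n_states - 1, s + (1 if a == 1 else -1)))

# Hypothetical subtask: earn reward only for reaching the subgoal state 3.
reach_subgoal = lambda s, a, s2: 1.0 if s2 == 3 else 0.0

random.seed(0)
mod = SubtaskModule(n_states=7, n_actions=2, reward_fn=reach_subgoal)
for episode in range(200):
    s = random.randrange(7)                 # random starts to cover the state space
    for _ in range(20):
        a = mod.act(s, eps=0.3)
        s2 = step(s, a)
        mod.update(s, a, s2)
        s = s2
        if s == 3:                          # subtask terminates at its subgoal
            break
```

Because each module carries its own reward function and Q-table, modules can be trained (and, as the paper argues, executed) independently of one another; after training, the greedy policy of this module moves right from state 0 toward its subgoal.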