Speeding-up Reinforcement Learning with Multi-step Actions
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
In reinforcement learning, the interaction between the agent and the environment generally takes place at a fixed time scale, i.e., the control interval is set to a fixed time step. Determining a suitable fixed time scale requires trading off control accuracy against learning complexity. In this paper, we present an alternative approach that enables the agent to learn a control policy using multiple time scales simultaneously. Instead of preselecting a fixed time scale, several time scales are available during learning, and the agent can select the appropriate time scale depending on the system state. The different time scales are multiples of a finest time scale, denoted the primitive time scale. Actions on a coarser time scale consist of several identical actions on the primitive time scale and are called multi-step actions (MSAs). The special structure of these actions is efficiently exploited in our recently proposed MSA-Q-learning algorithm. In this paper, we use MSAs to learn a control policy for a thermostat control problem. Our algorithm yields a fast and highly accurate control policy; in contrast, the standard Q-learning algorithm without MSAs fails to learn any useful control policy for this problem.
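The key structural property of MSAs is that an n-step action repeats one primitive action, so every contiguous sub-sequence of an executed MSA is itself a valid execution of a shorter MSA, and a single trajectory therefore yields Q-updates at several time scales at once. The following Python snippet is a minimal tabular sketch of this idea; the action set, available durations, hyperparameters, and the trajectory format are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumptions, not the paper's values).
GAMMA, ALPHA, EPSILON = 0.99, 0.1, 0.1
DURATIONS = (1, 2, 4, 8)       # time scales as multiples of the primitive step
ACTIONS = (-1, 0, 1)           # hypothetical primitive actions (e.g. heater down/hold/up)

Q = defaultdict(float)         # Q[(state, action, duration)], initialized to 0.0

def select(state):
    """Epsilon-greedy choice over (action, duration) pairs."""
    pairs = [(a, d) for a in ACTIONS for d in DURATIONS]
    if random.random() < EPSILON:
        return random.choice(pairs)
    return max(pairs, key=lambda ad: Q[(state, *ad)])

def best_value(state):
    """Value of the best (action, duration) pair in a state."""
    return max(Q[(state, a, d)] for a in ACTIONS for d in DURATIONS)

def msa_update(trajectory, action):
    """Update Q from one executed MSA.

    trajectory = [(s_0, r_1, s_1), (s_1, r_2, s_2), ...] records the
    primitive transitions produced while repeating `action`. Because the
    same primitive action was applied throughout, each contiguous
    sub-sequence of admissible length is a valid execution of a shorter
    MSA and receives its own Q-learning update.
    """
    n = len(trajectory)
    for i in range(n):                       # start of a sub-sequence
        ret = 0.0
        s_start = trajectory[i][0]
        for k in range(i, n):                # extend it one primitive step
            ret += GAMMA ** (k - i) * trajectory[k][1]
            d = k - i + 1
            if d in DURATIONS:               # admissible MSA length
                s_end = trajectory[k][2]
                target = ret + GAMMA ** d * best_value(s_end)
                key = (s_start, action, d)
                Q[key] += ALPHA * (target - Q[key])
```

In an episode loop, one would call select(s), repeat the chosen primitive action for the selected duration while recording the (state, reward, next-state) triples, and feed that trajectory to msa_update; the nested loop then visits every admissible sub-sequence, which is the structure the MSA-Q-learning algorithm exploits to speed up learning.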