Markov Decision Processes: Discrete Stochastic Dynamic Programming
Introduction to Reinforcement Learning
Automatic Feature Selection for Model-Based Reinforcement Learning in Factored MDPs
ICMLA '09 Proceedings of the 2009 International Conference on Machine Learning and Applications
Structural knowledge transfer by spatial abstraction for reinforcement learning agents
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
Stochastic abstract policies for knowledge transfer in robotic navigation tasks
MICAI'11 Proceedings of the 10th Mexican international conference on Advances in Artificial Intelligence - Volume Part I
In problems modeled as Markov Decision Processes (MDPs), knowledge transfer is closely related to the notions of generalization and state abstraction. Abstraction can be obtained through a factored representation that describes states by a set of features; the best action for one state can then be readily transferred to similar states, i.e., states with similar feature values. In this paper we compare forward and backward greedy feature selection for finding a compact set of features suited to such abstraction, thereby facilitating the transfer of knowledge to new problems. We also present heuristic versions of both approaches and compare all of them on a discrete simulated navigation problem.
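The two selection schemes contrasted in the abstract can be sketched generically. The code below is a minimal illustration, not the paper's algorithm: `score_fn` is a hypothetical stand-in for whatever criterion evaluates a feature subset (e.g., how well an abstraction built on those features supports the task), and the stopping rules are simple greedy ones chosen for the sketch.

```python
# Hypothetical sketch of forward and backward greedy feature selection.
# `score_fn(subset)` is an assumed evaluation function returning a number
# to maximize; the paper's actual selection criterion is not specified here.

def forward_greedy(features, score_fn):
    """Start from the empty set; repeatedly add the feature that most
    improves the score, stopping when no addition helps."""
    selected = []
    remaining = list(features)
    best_score = score_fn(selected)
    while remaining:
        candidate, cand_score = max(
            ((f, score_fn(selected + [f])) for f in remaining),
            key=lambda pair: pair[1],
        )
        if cand_score <= best_score:
            break  # no remaining feature improves the score
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = cand_score
    return selected

def backward_greedy(features, score_fn):
    """Start from the full set; repeatedly drop the feature whose removal
    hurts the score least, stopping when every removal degrades it."""
    selected = list(features)
    best_score = score_fn(selected)
    while len(selected) > 1:
        candidate, cand_score = max(
            ((f, score_fn([g for g in selected if g != f])) for f in selected),
            key=lambda pair: pair[1],
        )
        if cand_score < best_score:
            break  # removing any feature degrades the score
        selected.remove(candidate)
        best_score = cand_score
    return selected
```

With a toy score that rewards a known-relevant subset and penalizes extra features, both procedures converge on the same compact feature set; on harder criteria the two can disagree, which is the comparison the paper makes.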