This paper addresses the problem of apprenticeship learning, that is, learning control policies from demonstrations by an expert. An efficient framework for this problem is inverse reinforcement learning (IRL). Based on the assumption that the expert maximizes a utility function, IRL aims at learning the underlying reward function from example trajectories. Many IRL algorithms assume that the reward function is linearly parameterized and rely on computing the associated feature expectations, which is typically done through Monte Carlo simulation. However, this requires access to full trajectories for the expert policy, as well as at least a generative model for intermediate policies. In this paper, we introduce a temporal difference method, namely LSTD-μ, to compute these feature expectations. This allows extending apprenticeship learning to a batch and off-policy setting.
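To make the contrast concrete, below is a minimal sketch, under stated assumptions and not the paper's implementation, of the two estimators the abstract discusses: a Monte Carlo estimate of the feature expectations mu(pi) = E[sum_t gamma^t phi(s_t)] from full trajectories, and a batch LSTD-style estimate in the spirit of LSTD-μ, where each reward feature is treated as a pseudo-reward in the standard LSTD normal equations. The names phi, psi, monte_carlo_feature_expectations, and lstd_feature_expectations are hypothetical, and the off-policy correction needed to evaluate the expert policy from data generated by another policy is omitted for brevity.

```python
# Sketch only: contrasts Monte Carlo and LSTD-style estimation of feature
# expectations for a reward assumed linear in features, r(s) = theta . phi(s).
import numpy as np

def monte_carlo_feature_expectations(trajectories, phi, gamma):
    """Monte Carlo estimate of mu(pi) from full sampled trajectories.

    trajectories: list of state sequences generated by the policy.
    phi: feature map, state -> np.ndarray of shape (d,).
    """
    mu = np.zeros(phi(trajectories[0][0]).shape)
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi(s)
    return mu / len(trajectories)

def lstd_feature_expectations(transitions, phi, psi, gamma, reg=1e-6):
    """Batch LSTD-style estimate of feature expectations (the LSTD-mu idea).

    Each reward feature phi_i is treated as a pseudo-reward, and the usual
    LSTD normal equations A w = b are solved once, with one right-hand side
    per feature. transitions: list of (s, s_next) pairs; psi: approximation
    basis (possibly equal to phi). Returns W such that mu(s) ~= W.T @ psi(s).
    """
    k = psi(transitions[0][0]).shape[0]   # size of the approximation basis
    d = phi(transitions[0][0]).shape[0]   # number of reward features
    A = reg * np.eye(k)                   # regularized LSTD matrix
    B = np.zeros((k, d))                  # one column per reward feature
    for s, s_next in transitions:
        ps, ps_next = psi(s), psi(s_next)
        A += np.outer(ps, ps - gamma * ps_next)
        B += np.outer(ps, phi(s))
    return np.linalg.solve(A, B)
```

Note that in the LSTD-style estimator the transitions need not form complete trajectories, which is what makes a batch setting possible, whereas the Monte Carlo estimator requires full rollouts of the evaluated policy.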