Off-policy reinforcement learning aims to efficiently reuse data samples gathered in the past, which is essential for physically grounded AI since experiments are usually prohibitively expensive. A common approach is to use importance sampling to compensate for the bias caused by the mismatch between the data-sampling policies and the target policy. However, existing off-policy methods often do not explicitly take the variance of value function estimators into account, so their performance tends to be unstable. To cope with this problem, we propose an adaptive importance sampling technique that allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
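The abstract leaves the estimator unspecified, but a standard way to realize this kind of bias-variance control is to flatten the importance weights with an exponent ν ∈ [0, 1] (ν = 0 recovers the ordinary, biased but low-variance estimator; ν = 1 the fully importance-weighted, unbiased but high-variance one) and to select ν by cross-validating an importance-weighted error. The sketch below assumes exactly that form; the function names, the trajectory format, and the squared-error validation criterion are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of adaptive (flattened) importance sampling for off-policy
# value estimation. All names and data formats here are illustrative.
import numpy as np

def flattened_weight(pi_target, pi_behavior, trajectory, nu):
    """Per-trajectory importance weight, flattened by nu in [0, 1].

    nu = 0: ignore the policy mismatch (biased, low variance).
    nu = 1: full importance weighting (unbiased, potentially high variance).
    Intermediate nu trades bias against variance.
    """
    ratios = [pi_target(s, a) / pi_behavior(s, a) for (s, a, _) in trajectory]
    return np.prod(ratios) ** nu

def value_estimate(trajectories, pi_target, pi_behavior, nu, gamma=0.95):
    """Self-normalized importance-weighted estimate of the expected return
    of pi_target from trajectories collected under pi_behavior."""
    weights, returns = [], []
    for traj in trajectories:
        weights.append(flattened_weight(pi_target, pi_behavior, traj, nu))
        returns.append(sum(gamma**t * r for t, (_, _, r) in enumerate(traj)))
    weights, returns = np.asarray(weights), np.asarray(returns)
    return np.sum(weights * returns) / np.sum(weights)

def select_nu_by_cv(trajectories, pi_target, pi_behavior, candidates, n_folds=5):
    """Pick the flattening parameter by cross-validation: estimate the value
    on the training folds for each candidate nu and score it against a fully
    importance-weighted (nu = 1) reference estimate on the held-out fold, so
    the validation criterion itself stays consistent under off-policy data."""
    folds = np.array_split(np.arange(len(trajectories)), n_folds)
    scores = {nu: 0.0 for nu in candidates}
    for k in range(n_folds):
        held_out = [trajectories[i] for i in folds[k]]
        train = [trajectories[i] for j, f in enumerate(folds) if j != k for i in f]
        ref = value_estimate(held_out, pi_target, pi_behavior, nu=1.0)
        for nu in candidates:
            est = value_estimate(train, pi_target, pi_behavior, nu)
            scores[nu] += (est - ref) ** 2
    return min(scores, key=scores.get)
```

For instance, with trajectories stored as lists of (state, action, reward) tuples and stochastic policies exposed as probability functions pi(s, a), calling select_nu_by_cv(data, pi, pi_b, candidates=np.linspace(0, 1, 11)) returns the flattening value with the smallest held-out squared error, which is then used in value_estimate on the full data set.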