We consider a control problem in which the decision maker interacts with a standard Markov decision process, except that the reward functions vary arbitrarily over time. We extend the notion of Hannan consistency to this setting, showing that the agent can perform almost as well as the best deterministic policy in hindsight. We present efficient online algorithms in the spirit of reinforcement learning that ensure the agent's performance loss, or regret, vanishes over time, provided the environment is oblivious to the agent's actions. Counterexamples show, however, that the regret need not vanish when the environment is not oblivious.
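To illustrate the regret notion used here (this is not the paper's algorithm), the sketch below runs the classical multiplicative-weights (Hedge) method in the degenerate single-state case, where deterministic policies reduce to fixed actions and the rewards may vary arbitrarily over time. The function name and learning rate `eta` are illustrative choices, not from the source.

```python
import math

def hedge_regret(reward_seq, n_actions, eta=0.1):
    """Run multiplicative weights (Hedge) on an arbitrarily varying
    reward sequence and return the regret against the best single
    action in hindsight.

    reward_seq: list of length-n_actions lists, with rewards in [0, 1].
    """
    weights = [1.0] * n_actions        # one weight per action ("policy")
    algo_reward = 0.0                  # expected reward collected so far
    cum = [0.0] * n_actions            # cumulative reward of each fixed action
    for rewards in reward_seq:
        z = sum(weights)
        probs = [w / z for w in weights]
        # expected reward of the randomized strategy this round
        algo_reward += sum(p * r for p, r in zip(probs, rewards))
        # update cumulative rewards and exponentially reweight actions
        for a in range(n_actions):
            cum[a] += rewards[a]
            weights[a] *= math.exp(eta * rewards[a])
    # regret: best fixed action in hindsight minus the algorithm's reward
    return max(cum) - algo_reward
```

On an alternating reward sequence of length T with two actions, the regret stays bounded by roughly ln(2)/eta + eta*T, so the per-round regret vanishes as T grows; this Hannan-consistency property is what the paper extends from this single-state case to full Markov decision processes.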