The weighted majority algorithm. Information and Computation.
Exponentiated gradient versus gradient descent for linear predictors. Information and Computation.
Journal of the ACM (JACM)
Online computation and competitive analysis
Universal Portfolios With and Without Transaction Costs. Machine Learning (Special Issue: Computational Learning Theory, COLT '97).
Introduction to Reinforcement Learning
Neuro-Dynamic Programming
Near-Optimal Reinforcement Learning in Polynomial Time. Machine Learning.
Efficient algorithms for online decision problems. Journal of Computer and System Sciences (Special Issue: Learning Theory 2003).
Combining expert advice in reactive environments. Journal of the ACM (JACM).
Robust Control of Markov Decision Processes with Uncertain Transition Matrices. Operations Research.
Markov Decision Processes with Arbitrary Reward Processes. Mathematics of Operations Research.
NP-Hardness of checking the unichain condition in average cost MDPs. Operations Research Letters.
Near-optimal Regret Bounds for Reinforcement Learning. The Journal of Machine Learning Research.
Stochastic control of scalable high-performance distributed computations. Proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM'11), Part II.
We consider a Markov decision process (MDP) setting in which the reward function is allowed to change after each time step (possibly in an adversarial manner), while the dynamics remain fixed. As in the experts setting, we address the question of how well an agent can perform relative to the reward achieved by the best stationary policy in hindsight. We provide efficient algorithms whose regret bounds have no dependence on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions.
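The experts-style guarantee described in the abstract is typically built on a multiplicative-weights (Hedge) update, as in the weighted majority algorithm cited above. Below is a minimal illustrative sketch of that update over a fixed action set, not the paper's actual algorithm; the learning rate `eta` and the reward stream are made-up values for demonstration.

```python
import math

def hedge_update(weights, rewards, eta=0.1):
    """One multiplicative-weights (Hedge) step: each action's weight is
    scaled exponentially in its observed reward, then renormalized so
    the weights remain a probability distribution over actions."""
    scaled = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Three actions with a uniform prior; action 0 consistently earns the
# highest reward, so its weight should come to dominate.
w = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    w = hedge_update(w, rewards=[1.0, 0.5, 0.0])
# After 50 rounds the distribution concentrates on the best action.
```

Because the weight ratio between two actions grows exponentially in their cumulative reward gap, the regret of this scheme scales only logarithmically in the number of actions, which is the dependence the abstract refers to.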