This paper gives the first rigorous convergence analysis of analogues of Watkins's Q-learning algorithm applied to average cost control of finite-state Markov chains. We discuss two algorithms that may be viewed as stochastic approximation counterparts of two existing algorithms for recursively computing the value function of the average cost problem: the traditional relative value iteration (RVI) algorithm and a recent algorithm of Bertsekas based on the stochastic shortest path (SSP) formulation of the problem. Both synchronous and asynchronous implementations are considered and analyzed using the ODE method, which involves establishing the asymptotic stability of the associated ODE limits. The SSP algorithm also uses ideas from two-time-scale stochastic approximation.
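To make the RVI variant concrete, here is a minimal NumPy sketch of a synchronous RVI Q-learning update of the kind the abstract describes: every state-action pair is updated at each iteration, and the iterates are normalized by an offset f(Q), taken below to be the value at a fixed reference pair. The function name, the specific choice of f, and all model and step-size parameters are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def rvi_q_learning(P, c, n_iters=20_000, alpha=0.05, ref=(0, 0), seed=0):
    """Sketch of synchronous RVI Q-learning for average-cost control.

    P : (S, A, S) array of transition probabilities P[s, a, s'].
    c : (S, A) array of one-step costs.
    f(Q) = Q[ref] is one admissible normalization; the analysis allows
    other offsets. Hypothetical helper, not the paper's exact algorithm.
    """
    rng = np.random.default_rng(seed)
    S, A = c.shape
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        # Synchronous sweep: sample a next state for every (s, a) pair.
        # An asynchronous variant would instead update one sampled pair.
        for s in range(S):
            for a in range(A):
                s_next = rng.choice(S, p=P[s, a])
                target = c[s, a] + Q[s_next].min() - Q[ref]
                Q[s, a] += alpha * (target - Q[s, a])
    rho = Q[ref]          # estimate of the optimal average cost
    policy = Q.argmin(1)  # greedy (cost-minimizing) policy
    return Q, rho, policy

# Two-state, two-action toy model (made-up numbers for illustration).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
c = np.array([[1.0, 0.5],
              [2.0, 0.1]])
Q, rho, policy = rvi_q_learning(P, c)
```

The subtracted offset f(Q) is what distinguishes this update from discounted Q-learning: average-cost Q-values solve the optimality equation only up to an additive constant, so without the normalization the iterates would drift, while with it the reference value converges to the optimal average cost.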