In this article we develop techniques for applying Approximate Dynamic Programming (ADP) to the control of time-varying queuing systems. First, we show that the classical state space representation in queuing systems leads to approximations that can be significantly improved by state disaggregation, which increases the dimensionality of the state space. Second, we handle time-varying parameters by adding them to the state space with an ADP parameterization. We demonstrate these techniques on optimal admission control in a retrial queue with abandonments and time-varying parameters. The numerical experiments show that our techniques yield near-optimal performance.
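The second technique above can be illustrated with a minimal sketch, assuming a linear value-function approximation whose feature vector includes both the queue length and the current (time-varying) arrival rate, combined with a one-step-lookahead admission rule. The feature choices, parameter values, and function names here are illustrative assumptions, not the paper's actual parameterization:

```python
def features(n, lam):
    # Hypothetical feature vector for the augmented state (n, lam):
    # n is the queue length, lam is the current time-varying arrival rate,
    # which is added to the state space as described in the abstract.
    return [1.0, n, n * n, lam, n * lam]

def approx_value(theta, n, lam):
    # Linear ADP parameterization: V(n, lam) ~ theta^T * phi(n, lam).
    return sum(t * f for t, f in zip(theta, features(n, lam)))

def admit(theta, n, lam, reward=1.0):
    # One-step-lookahead admission rule: accept an arriving customer if
    # the immediate reward outweighs the estimated drop in future value
    # caused by the longer queue.
    return reward + approx_value(theta, n + 1, lam) >= approx_value(theta, n, lam)
```

With weights that make the value decrease in the queue length, this rule admits customers to a short queue and rejects them once the queue is long, which is the qualitative shape of an admission-control threshold policy.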