Proceedings of the Winter Simulation Conference
This paper presents initial results on the application of simulation-based Approximate Dynamic Programming (ADP) to the control of a benchmark semiconductor fab model known as the Intel Mini-Fab. The ADP approach is based on an average-cost temporal-difference, TD(λ), learning algorithm within an actor-critic architecture. Simulation experiments, in which both the policies generated via ADP and commonly used dispatching rules were applied to the Mini-Fab, showed that the ADP-generated policies performed well in average Work-In-Process (WIP) and average Cycle Time relative to the dispatching rules considered.
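The average-cost TD(λ) critic underlying this kind of ADP approach can be sketched as follows. This is a minimal linear-features illustration, not the paper's implementation; the function name, parameters, and step sizes are all assumptions for the sketch. The critic tracks a differential value function and a running estimate of the average cost, updating both from the differential TD error along a simulated trajectory.

```python
import numpy as np

def avg_cost_td_lambda(features, costs, next_features,
                       alpha=0.05, beta=0.01, lam=0.7):
    """One pass of average-cost TD(lambda) over a simulated trajectory.

    features[t]      -- feature vector phi(s_t)
    costs[t]         -- immediate cost g(s_t, a_t)
    next_features[t] -- feature vector phi(s_{t+1})
    Returns the linear critic weights w and the average-cost estimate mu.
    """
    w = np.zeros(features.shape[1])   # weights of the differential value function
    z = np.zeros_like(w)              # eligibility trace
    mu = 0.0                          # running estimate of the average cost
    for phi, g, phi_next in zip(features, costs, next_features):
        # Differential TD error: immediate cost minus average cost,
        # plus the one-step change in the estimated differential value.
        delta = g - mu + phi_next @ w - phi @ w
        z = lam * z + phi             # accumulate the eligibility trace
        w += alpha * delta * z        # critic update along the trace
        mu += beta * (g - mu)         # track the long-run average cost
    return w, mu
```

In an actor-critic scheme such as the one described, the differential TD error `delta` computed here would also drive the actor's policy-parameter update; the sketch above covers only the critic.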