Optimizing time warp simulation with reinforcement learning techniques
Proceedings of the 39th conference on Winter simulation: 40 years! The best is yet to come
In a Time-Warp-based distributed simulation system, a simulation process must save its states and events in order to handle rollbacks. Periodically, the global minimum of the timestamps of all events and messages in the system is computed. This value, known as the global virtual time (GVT), plays an important role in a Time Warp system. Because of its computational overhead, GVT is computed only periodically, and an important problem is to determine the optimal interval between two GVT computations. In this paper we present a new approach that uses a simple reinforcement learning technique to select the GVT interval. Applied in a Time-Warp-based distributed VLSI simulation system, our method selected good GVT intervals and improved the system's performance.
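To make the idea concrete, one simple reinforcement learning formulation of interval selection is an epsilon-greedy multi-armed bandit: each candidate GVT interval is an arm, and the reward after each GVT round is a throughput measure such as events committed per second. This is a hypothetical sketch for illustration only, not the paper's actual algorithm; the class name, candidate intervals, and reward signal are all assumptions.

```python
import random

class GVTIntervalSelector:
    """Hypothetical epsilon-greedy bandit over candidate GVT intervals."""

    def __init__(self, intervals, epsilon=0.1):
        self.intervals = intervals              # candidate intervals, e.g. in ms (assumed)
        self.epsilon = epsilon                  # exploration probability
        self.counts = [0] * len(intervals)      # times each interval was tried
        self.values = [0.0] * len(intervals)    # running mean reward per interval

    def select(self):
        # Explore a random interval with probability epsilon,
        # otherwise exploit the interval with the best reward estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.intervals))
        return max(range(len(self.intervals)), key=lambda i: self.values[i])

    def update(self, arm, reward):
        # Incremental mean update of the value estimate for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Usage: after each GVT round, feed back a performance reward
# (here, a made-up "events committed per second" figure).
sel = GVTIntervalSelector([50, 100, 200, 400])
arm = sel.select()
sel.update(arm, reward=123.4)
print("next GVT interval:", sel.intervals[arm])
```

The design trades off exploration (trying other intervals in case the workload changes) against exploitation (sticking with the interval that has performed best so far); more elaborate schemes could decay epsilon or use a full state-action value table.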