Rethinking the initialization bias problem in steady-state discrete event simulation
Proceedings of the Winter Simulation Conference
The question of how long to run a discrete event simulation before data collection starts is an important issue when estimating steady-state performance measures such as average queue lengths. By using experiments based on numerical (nonsimulation) methods published elsewhere, we shed light on this question. Our experiments indicate that no initialization phase should be used when the simulation starts in a state with a reasonably high equilibrium probability. Delaying data collection is justified only if the starting state is highly unlikely, and in that case data collection should begin as soon as the system enters a state with a reasonably high equilibrium probability.
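The rule suggested by the abstract can be sketched in a small simulation. The following is a minimal illustration, not the paper's actual procedure: an M/M/1 queue simulated as a continuous-time Markov chain, where data collection begins only once the current queue length n has equilibrium probability pi_n = (1 - rho) * rho**n above an arbitrary threshold (0.01 here, an assumption for illustration). Starting in a likely state, collection begins immediately; starting in an unlikely state (e.g. n0 = 50 at rho = 0.8), collection is deferred until the queue drains into a likely state.

```python
import random


def mm1_mean_queue(lam=0.8, mu=1.0, n0=0, horizon=200_000, seed=1):
    """Time-average number in system for an M/M/1 queue.

    Illustrative warm-up rule (an assumption, not the paper's method):
    start collecting data as soon as the state n satisfies
    pi_n = (1 - rho) * rho**n >= threshold, i.e. the chain has entered
    a state with reasonably high equilibrium probability.
    """
    rng = random.Random(seed)
    rho = lam / mu
    threshold = 0.01  # arbitrary cutoff for "reasonably likely"

    def likely(n):
        return (1 - rho) * rho**n >= threshold

    n, t = n0, 0.0
    collecting = likely(n)          # no warm-up if the start state is likely
    area, t_collect = 0.0, 0.0      # integral of n dt and collection time
    while t < horizon:
        rate = lam + (mu if n > 0 else 0.0)   # total event rate in state n
        dt = rng.expovariate(rate)            # exponential holding time
        if collecting:
            area += n * dt
            t_collect += dt
        t += dt
        if rng.random() < lam / rate:         # arrival vs. departure
            n += 1
        else:
            n -= 1
        if not collecting and likely(n):      # begin collection here
            collecting = True
    return area / t_collect
```

For lam = 0.8 and mu = 1.0 the exact steady-state mean number in system is rho / (1 - rho) = 4, so even from the unlikely start n0 = 50 the estimate should land near 4 once collection is deferred as described.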