There are two basic approaches to constructing confidence intervals for steady-state parameters from a single simulation run. The first is to consistently estimate the variance constant in the relevant central limit theorem; this is the approach used in the regenerative, spectral, and autoregressive methods. The second approach (standardized time series, STS), due to Schruben [10], is to "cancel out" the variance constant; it contains the batch means method as a special case. Our goal in this paper is to discuss the large-sample properties of the confidence intervals generated by the STS method. In particular, the asymptotic (as run length becomes large) expected value and variance of the length of these confidence intervals are studied and shown to be inferior to the behavior of intervals constructed using the first approach.
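To make the distinction concrete, here is a minimal sketch of the second (cancellation) approach in its batch means form: the unknown variance constant inflates both the spread of the batch means and the denominator of the Studentized statistic, so it drops out of the interval. The function name batch_means_ci, the batch count, and the AR(1) demo process are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the "cancellation" (STS / batch means) approach,
# assuming a single long run of real-valued simulation output.
import numpy as np
from scipy.stats import t


def batch_means_ci(x, n_batches=20, alpha=0.05):
    """Confidence interval for the steady-state mean via batch means.

    The CLT variance constant affects the batch means and the
    t-statistic's denominator identically, so it cancels; no
    consistent variance estimator is required.
    """
    n = len(x) - len(x) % n_batches              # trim to equal-sized batches
    means = np.asarray(x[:n]).reshape(n_batches, -1).mean(axis=1)
    center = means.mean()
    half = t.ppf(1 - alpha / 2, n_batches - 1) * means.std(ddof=1) / np.sqrt(n_batches)
    return center - half, center + half


# Demo on a synthetic autocorrelated sequence (an AR(1) process with
# steady-state mean 0); the interval should cover 0 for most seeds.
rng = np.random.default_rng(0)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
print(batch_means_ci(x))
```

By contrast, the first approach would replace the denominator with a consistent estimate of the variance constant itself, obtained for example by a regenerative, spectral, or autoregressive estimator, and use normal rather than t critical values.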