Estimating the variance of the sample mean of a stochastic process is essential in assessing how well the sample mean estimates the population mean, which is the fundamental question in simulation experiments. Most existing methods for estimating this variance from simulation output assume the simulation run length is known in advance. This paper proposes an implementable batch-size selection procedure for estimating the variance of the sample mean without requiring that the sample size or simulation run length be known a priori.
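To make the underlying idea concrete, here is a minimal sketch of the classical non-overlapping batch-means estimator of the variance of the sample mean, which the batch-size selection problem builds on. This is an illustrative implementation with a fixed, user-supplied batch size, not the adaptive procedure proposed in the paper; the function name and data are hypothetical.

```python
import numpy as np

def batch_means_variance(x, batch_size):
    """Estimate Var(sample mean) from one output series via non-overlapping batch means.

    The series is split into k = n // batch_size complete batches; the sample
    variance of the k batch means, divided by k, estimates Var(x-bar).
    """
    x = np.asarray(x, dtype=float)
    k = len(x) // batch_size          # number of complete batches
    if k < 2:
        raise ValueError("need at least two complete batches")
    trimmed = x[: k * batch_size]     # discard the incomplete final batch
    means = trimmed.reshape(k, batch_size).mean(axis=1)  # batch means
    return means.var(ddof=1) / k      # estimate of Var(sample mean)

# Usage on synthetic i.i.d. N(0, 1) data, where Var(x-bar) = 1/n exactly:
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
est = batch_means_variance(y, batch_size=100)   # should be near 1e-4
```

For correlated output the batch size must be large enough that batch means are approximately independent, which is precisely why batch-size selection matters: too small a batch size biases the estimator, while too large a batch size leaves too few batches and inflates its variance.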