Graphical methods for evaluating and comparing confidence-interval procedures
Operations Research
Stating a confidence interval is a traditional method of indicating the sampling error of a point estimator of a model's performance measure. We propose a single dimensionless criterion, inspired by Schruben's coverage function, for evaluating and comparing the statistical quality of confidence-interval procedures. Procedure quality is usually thought to be multidimensional, composed of the mean (and maybe the variance) of the interval-width distribution and the probability of covering the performance measure (and maybe other values). Our criterion, which we argue lies at the heart of what makes a confidence-interval procedure good or bad, compares a given procedure's intervals to those of an "ideal" procedure. For a given point estimator (such as the sample mean) and given experimental data process (such as a first-order autoregressive process with specified parameters), our single criterion is a function of only the sample size (or other rule that ends sampling).
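To make the experimental setup in the abstract concrete, here is a minimal Monte Carlo sketch in Python of the kind of evaluation it describes: a sample-mean point estimator applied to a first-order autoregressive process, with a confidence-interval procedure judged by its empirical coverage probability and interval-width distribution as a function of sample size. This is not the paper's criterion, which is defined in the paper itself; the function names (ar1_path, naive_ci, evaluate) and the deliberately naive i.i.d.-style interval are illustrative assumptions, chosen because ignoring autocorrelation makes the coverage shortfall easy to see.

    # Hypothetical sketch (not the paper's criterion): Monte Carlo estimation
    # of a CI procedure's coverage and mean width for the sample mean of an
    # AR(1) process with known true mean zero.
    import numpy as np

    def ar1_path(n, phi, sigma=1.0, rng=None):
        """Generate X_t = phi*X_{t-1} + eps_t, started in steady state."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.empty(n)
        x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))  # stationary start
        eps = rng.normal(0.0, sigma, n - 1)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + eps[t - 1]
        return x

    def naive_ci(x, z=1.96):
        """Naive i.i.d.-style CI for the mean; it ignores autocorrelation,
        so its true coverage falls below the nominal 95% level."""
        half = z * x.std(ddof=1) / np.sqrt(len(x))
        m = x.mean()
        return m - half, m + half

    def evaluate(procedure, n, phi, true_mean=0.0, reps=1000, seed=1):
        """Estimate coverage probability and mean width at sample size n."""
        rng = np.random.default_rng(seed)
        cover, widths = 0, []
        for _ in range(reps):
            lo, hi = procedure(ar1_path(n, phi, rng=rng))
            cover += lo <= true_mean <= hi
            widths.append(hi - lo)
        return cover / reps, float(np.mean(widths))

    for n in (64, 256, 1024):
        cov, w = evaluate(naive_ci, n, phi=0.9)
        print(f"n={n:5d}  estimated coverage={cov:.3f}  mean width={w:.3f}")

Under these assumptions, an "ideal" procedure would attain the nominal 95% coverage with the narrowest possible intervals, while the naive procedure's estimated coverage falls far below 0.95 when phi is large; coverage, width mean, and width variance are exactly the multidimensional quality information that the paper's single dimensionless criterion is meant to summarize.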