Output analysis: on choosing a single criterion for confidence-interval procedures

  • Authors:
  • Bruce Schmeiser; Yingchieh Yeh

  • Affiliations:
  • Purdue University, West Lafayette, IN; Yuan Ze University, Taoyuan

  • Venue:
  • Proceedings of the 34th Conference on Winter Simulation: Exploring New Frontiers
  • Year:
  • 2002

Abstract

Stating a confidence interval is a traditional method of indicating the sampling error of a point estimator of a model's performance measure. We propose a single dimensionless criterion, inspired by Schruben's coverage function, for evaluating and comparing the statistical quality of confidence-interval procedures. Procedure quality is usually thought to be multidimensional, composed of the mean (and maybe the variance) of the interval-width distribution and the probability of covering the performance measure (and maybe other values). Our criterion, which we argue lies at the heart of what makes a confidence-interval procedure good or bad, compares a given procedure's intervals to those of an "ideal" procedure. For a given point estimator (such as the sample mean) and given experimental data process (such as a first-order autoregressive process with specified parameters), our single criterion is a function of only the sample size (or other rule that ends sampling).
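To make the multidimensional view of procedure quality concrete, the sketch below is a minimal Monte Carlo illustration (not the authors' proposed criterion) of how one might estimate the coverage probability and mean interval width of a naive i.i.d.-style confidence interval for the mean of a first-order autoregressive process; the function names, AR(1) parameters, and nominal level are illustrative assumptions.

```python
import numpy as np

def ar1_path(n, phi, mu=0.0, sigma=1.0, rng=None):
    """Simulate an AR(1) series X_t = mu + phi*(X_{t-1} - mu) + eps_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n)
    # start near the stationary distribution (assumes |phi| < 1)
    x[0] = mu + rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, sigma)
    return x

def naive_ci(x, z=1.645):
    """Nominal 90% interval that (incorrectly) treats the data as i.i.d."""
    half = z * x.std(ddof=1) / np.sqrt(len(x))
    m = x.mean()
    return m - half, m + half

def evaluate(procedure, n=100, phi=0.8, mu=0.0, reps=10_000, seed=1):
    """Estimate coverage probability and mean interval width by Monte Carlo."""
    rng = np.random.default_rng(seed)
    covered, widths = 0, []
    for _ in range(reps):
        lo, hi = procedure(ar1_path(n, phi, mu, rng=rng))
        covered += lo <= mu <= hi
        widths.append(hi - lo)
    return covered / reps, float(np.mean(widths))

coverage, mean_width = evaluate(naive_ci)
print(f"empirical coverage {coverage:.3f}, mean width {mean_width:.3f}")
```

For positively autocorrelated data (phi = 0.8 here), such a procedure typically covers the true mean far less often than its nominal 90%, which is exactly the kind of multidimensional comparison (coverage and width distribution) that the paper's single dimensionless criterion is intended to replace.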