Large-sample theory for standardized time series: an overview

  • Authors:
  • Peter W. Glynn; Donald L. Iglehart

  • Affiliations:
  • Department of Industrial Engineering, University of Wisconsin, Madison, Wisconsin; Department of Operations Research, Stanford University, Stanford, California

  • Venue:
  • WSC '85 Proceedings of the 17th conference on Winter simulation
  • Year:
  • 1985


Abstract

There are two basic approaches to constructing confidence intervals for steady-state parameters from a single simulation run. The first is to consistently estimate the variance constant in the relevant central limit theorem. This is the approach used in the regenerative, spectral, and autoregressive methods. The second approach (standardized time series, STS), due to SCHRUBEN [10], is to “cancel out” the variance constant. This second approach contains the batch means method as a special case. Our goal in this paper is to discuss the large-sample properties of the confidence intervals generated by the STS method. In particular, the asymptotic (as run size becomes large) expected value and variance of the length of these confidence intervals are studied and shown to be inferior to the behavior manifested by intervals constructed using the first approach.
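To make the second approach concrete, the following is a minimal sketch of the batch means method, the special case of STS mentioned in the abstract. The AR(1) data generator, batch count, and run length are illustrative assumptions made here for demonstration; they are not taken from the paper.

```python
# Sketch of a batch means confidence interval for a steady-state mean.
# The variance constant in the central limit theorem is never estimated
# directly; it is "cancelled out" by dividing by the sample variability
# of the batch means.
import numpy as np
from scipy import stats

def batch_means_ci(x, num_batches=20, alpha=0.05):
    """Return a (lower, upper) confidence interval for the steady-state mean."""
    n = len(x) - (len(x) % num_batches)          # trim so batches are equal-sized
    batch_size = n // num_batches
    means = x[:n].reshape(num_batches, batch_size).mean(axis=1)
    grand_mean = means.mean()
    half_width = (stats.t.ppf(1 - alpha / 2, num_batches - 1)
                  * means.std(ddof=1) / np.sqrt(num_batches))
    return grand_mean - half_width, grand_mean + half_width

# Illustrative single run: a stationary AR(1) process with true mean 0
# (an assumed example process, not one used in the paper).
rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

print(batch_means_ci(x))
```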