A fully sequential procedure for indifference-zone selection in simulation
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Proceedings of the 33rd conference on Winter simulation
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Selecting the best system: theory and methods
Proceedings of the 35th conference on Winter simulation: driving innovation
Efficient simulation procedures: comparison with a standard via fully sequential procedures
Proceedings of the 35th conference on Winter simulation: driving innovation
Comparison with a standard via fully sequential procedures
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Control variates for screening, selection, and estimation of the best
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Finding probably better system configurations quickly
SIGMETRICS '06/Performance '06 Proceedings of the joint international conference on Measurement and modeling of computer systems
Statistical selection of the best system
WSC '05 Proceedings of the 37th conference on Winter simulation
Determination of the "best" system that meets a limit standard
WSC '05 Proceedings of the 37th conference on Winter simulation
Combined ranking and selection with control variates
Proceedings of the 38th conference on Winter simulation
Comparison of limit standards using a sequential probability ratio test
Proceedings of the 38th conference on Winter simulation
Performance evaluations of comparison-with-a-standard procedures
Proceedings of the 38th conference on Winter simulation
State-of-the-Art Review: A User's Guide to the Brave New World of Designing Simulation Experiments
INFORMS Journal on Computing
Economic Analysis of Simulation Selection Problems
Management Science
Industrial strength COMPASS: A comprehensive algorithm and software for optimization via simulation
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Simulation optimization with hybrid golden region search
Winter Simulation Conference
We consider the problem of comparing a finite number of stochastic systems with respect to a single system (designated as the "standard") via simulation experiments. The comparison is based on expected performance, and the goal is to determine whether any system has larger expected performance than the standard and, if so, to identify the best of the alternatives. In this paper we provide two-stage experiment design and analysis procedures to solve the problem for a variety of scenarios, including those in which we encounter unequal variances across systems, as well as those in which we use the variance-reduction technique of common random numbers when it is appropriate to do so. The emphasis on "when" is added because in some cases common random numbers can be counterproductive when performing comparisons with a standard. We also provide methods for estimating the critical constants required by our procedures, present a portion of an extensive empirical study, and demonstrate one of the procedures via a numerical example.
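The two-stage design mentioned in the abstract can be sketched roughly as follows. This is an illustrative sketch only, not the paper's actual procedure: the function name, the placeholder critical constant `h`, and the simplified final decision rule are all assumptions made for demonstration; in practice the critical constant depends on the number of systems and the confidence level and must be obtained from tables or estimated, as the paper discusses.

```python
import math
import random
import statistics

def two_stage_select(alternatives, standard, n0=20, delta=0.5, h=2.5):
    """Two-stage comparison with a standard (illustrative sketch).

    alternatives : list of zero-argument callables, each returning one
                   i.i.d. performance observation for that system
    standard     : zero-argument callable simulating the standard system
    n0           : first-stage sample size per system
    delta        : indifference-zone parameter (smallest difference in
                   expected performance worth detecting)
    h            : placeholder critical constant (an assumption here; the
                   real value is problem-specific)

    Returns the index of the selected alternative, or None to retain
    the standard.
    """
    all_systems = [standard] + list(alternatives)
    samples = []
    for simulate in all_systems:
        # Stage 1: n0 replications to estimate this system's variance.
        obs = [simulate() for _ in range(n0)]
        s2 = statistics.variance(obs)
        # Stage 2: the total sample size grows with the system's own
        # variance, which is how unequal variances across systems are
        # accommodated.
        n_total = max(n0, math.ceil((h * math.sqrt(s2) / delta) ** 2))
        obs += [simulate() for _ in range(n_total - n0)]
        samples.append(obs)
    means = [statistics.fmean(obs) for obs in samples]
    best = max(range(1, len(all_systems)), key=lambda i: means[i])
    # Simplified decision rule: retain the standard unless the best
    # alternative's sample mean exceeds the standard's.
    return best - 1 if means[best] > means[0] else None
```

For example, with a standard of mean 0 and alternatives of means 0.1 and 2.0 (all with unit variance), the sketch selects the second alternative; when no alternative beats the standard, it returns `None`.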