State-of-the-Art Review: A User's Guide to the Brave New World of Designing Simulation Experiments

  • Authors:
  • Jack P. C. Kleijnen; Susan M. Sanchez; Thomas W. Lucas; Thomas M. Cioppa

  • Affiliations:
  • Department of Information Systems and Management/Center for Economic Research (CentER), Tilburg University (UvT), Postbox 90153, 5000 LE Tilburg, The Netherlands
  • Operations Research Department and the Graduate School of Business and Public Policy, Naval Postgraduate School, Monterey, California 93943-5219, USA
  • Operations Research Department, Naval Postgraduate School, Monterey, California 93943-5219, USA
  • U.S. Army Training and Doctrine Command Analysis Center, Naval Postgraduate School, PO Box 8692, Monterey, California 93943-0692, USA

  • Venue:
  • INFORMS Journal on Computing

  • Year:
  • 2005


Abstract

Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. We discuss a toolkit of designs for simulation analysts with limited DOE expertise who want to select a design and an appropriate analysis for their experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments (as opposed to real-world experiments) that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system, (2) finding robust decisions or policies, as opposed to so-called optimal solutions, and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical of simulation, such as having many more factors than in real-world experiments and the sequential nature of the data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs, such as ease of design construction, flexibility of analysis, and efficiency considerations. Moreover, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, complicated metamodels require more simulation runs. We present several procedures to validate the metamodel estimated from a specific design, and we summarize a case study illustrating several of our major themes. We conclude with a discussion of areas that merit more work to achieve the potential benefits, either via new research or via incorporation into standard simulation or statistical packages.
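The abstract's central workflow (choose a design, run the simulation at the design points, fit a metamodel, validate it) can be sketched in a minimal form. The example below is illustrative only, not the paper's own procedure: it assumes a 2^k full factorial design with coded -1/+1 levels, a first-order polynomial metamodel fitted by ordinary least squares, and leave-one-out cross-validation residuals as a validation check; the toy response function stands in for an actual simulation run.

```python
import itertools
import numpy as np

def full_factorial(k):
    """2^k full factorial design matrix with coded levels -1/+1."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

def fit_first_order_metamodel(X, y):
    """Fit y ~ b0 + sum_i b_i * x_i by ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def loo_residuals(X, y):
    """Leave-one-out cross-validation residuals, a simple metamodel
    validation check: refit without each point and predict it."""
    n = len(y)
    res = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        b = fit_first_order_metamodel(X[mask], y[mask])
        res[i] = y[i] - (b[0] + X[i] @ b[1:])
    return res

# Toy "simulation": a response driven mainly by factors 1 and 3,
# plus small noise (standing in for stochastic simulation output).
rng = np.random.default_rng(0)
X = full_factorial(3)                              # 8 design points
y = 10 + 4*X[:, 0] + 0.5*X[:, 1] + 2*X[:, 2] + rng.normal(0, 0.1, len(X))

beta = fit_first_order_metamodel(X, y)             # estimated metamodel
max_loo = np.max(np.abs(loo_residuals(X, y)))      # validation summary
```

Because the factorial design is orthogonal, the least-squares estimates in `beta` recover the intercept and main effects cleanly; large leave-one-out residuals would instead signal that the assumed first-order metamodel is inadequate and, as the abstract notes, a more complicated metamodel (and hence more runs) may be needed.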