Acceptability criteria establish the measures against which the appropriateness of a simulation for an intended use is judged. The quality of any product of a VV&A process depends directly on the quality of the acceptability criteria it uses. Much has been written on the properties of good acceptability criteria, but the literature offers little advice on how to develop them. This paper describes one effort to develop acceptability criteria for a major simulation development program and the lessons learned from that effort. The process involved decomposing the information in the available requirements documents (i.e., a Capability Development Document, a Performance Specification, and a requirements model) into individual requirements statements; determining from those statements the required functional and system capabilities; identifying any requirements that define specific performance metrics; integrating that information into observable acceptability thresholds; and checking those thresholds for consistency with the other acceptability criteria and the original requirements statements. The resulting acceptability criteria were then circulated among a group of subject matter experts to verify that each criterion was necessary and that, together, they completely characterized the users' needs. This process produced high-resolution, traceable, and defensible acceptability criteria that could supply detailed insight into the simulation's capabilities in the context of a user's needs. However, many outside factors can influence the acceptability criteria process, including program office preferences, testing limitations, developer concerns, and user representative concerns. All of these factors must be mediated to produce acceptability criteria that are acceptable to a majority of these influential stakeholders.
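The traceability and consistency checks described above can be sketched in code. This is a hypothetical illustration only, not the paper's actual tooling: the `Requirement` and `AcceptabilityCriterion` structures and the `check_traceability` function are assumptions introduced here to show how each criterion can be verified to trace back to a known requirement, and how every requirement that defines a performance metric can be checked for coverage by some observable threshold.

```python
# Hypothetical sketch of requirements-to-criteria traceability checking;
# the data model and function names are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class Requirement:
    req_id: str
    statement: str
    metric: Optional[str] = None  # set if the statement defines a performance metric

@dataclass
class AcceptabilityCriterion:
    crit_id: str
    threshold: str                # observable acceptability threshold
    source_req_ids: List[str] = field(default_factory=list)  # traceability links

def check_traceability(
    requirements: List[Requirement],
    criteria: List[AcceptabilityCriterion],
) -> Tuple[List[str], List[str]]:
    """Return (orphan criteria, uncovered metric requirements).

    A criterion is an orphan if any of its source requirement IDs is unknown;
    a requirement with a performance metric is uncovered if no criterion
    traces back to it.
    """
    known = {r.req_id for r in requirements}
    orphans = [c.crit_id for c in criteria
               if not set(c.source_req_ids) <= known]
    covered = {rid for c in criteria for rid in c.source_req_ids}
    uncovered = [r.req_id for r in requirements
                 if r.metric is not None and r.req_id not in covered]
    return orphans, uncovered
```

In this sketch, an empty result from both lists would indicate that every criterion is defensible (traceable to a requirement) and every metric-bearing requirement is covered; non-empty lists flag items for the subject-matter-expert review described above.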