Measures of agreement between computation and experiment: validation metrics
Journal of Computational Physics - Special issue: Uncertainty quantification in simulation science
With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
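The confidence-interval idea behind the metric can be illustrated with a small numerical sketch. The snippet below is not the paper's implementation; it is a minimal illustration, assuming a fixed value of the input variable, a model prediction `model_value`, repeated experimental measurements `exp_measurements`, and a hypothetical lookup table `T_90` of two-sided Student-t critical values at 90% confidence. It estimates the model's relative error against the experimental mean and a confidence-interval half-width that reflects the experimental measurement uncertainty.

```python
import math
from statistics import mean, stdev

# Two-sided Student-t critical values for 90% confidence (alpha = 0.10),
# indexed by degrees of freedom. Illustrative table for small samples.
T_90 = {2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015, 9: 1.833}

def validation_metric(model_value, exp_measurements, dof_table=T_90):
    """Estimate the model's error relative to the experimental mean,
    with a confidence-interval half-width on that error reflecting
    the scatter in the experimental measurements."""
    n = len(exp_measurements)
    y_bar = mean(exp_measurements)               # experimental sample mean
    s = stdev(exp_measurements)                  # sample standard deviation
    t = dof_table[n - 1]                         # t critical value, n-1 dof
    error = (model_value - y_bar) / y_bar        # estimated relative error
    half_width = t * s / (math.sqrt(n) * abs(y_bar))  # CI half-width
    return error, half_width

err, hw = validation_metric(102.0, [95.0, 98.0, 101.0, 97.0])
print(f"relative error = {err:+.3f} \u00b1 {hw:.3f}")
```

A small estimated error whose confidence interval is also narrow indicates good model accuracy assessed with low experimental uncertainty; a wide interval signals that more (or better) measurements are needed before the model's accuracy can be judged, which is the interpretability the abstract emphasizes. Extending this pointwise comparison over a range of the input variable requires the interpolation or regression variants discussed above.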