An Efficient Approach to Deal with the Curse of Dimensionality in Sensitivity Analysis Computations

  • Authors:
  • Marco Ratto; Andrea Saltelli

  • Venue:
  • ICCS '02 Proceedings of the International Conference on Computational Science-Part I
  • Year:
  • 2002

Abstract

This paper deals with the computation of sensitivity indices in global sensitivity analysis. Given a model y=f(x1,...,xk), where the k input factors xi are uncorrelated with one another, one can see y as the realisation of a stochastic process obtained by sampling each of the xi from its marginal distribution. The sensitivity indices are related to the decomposition of the variance of y into terms due to each xi taken singly, plus terms due to the cooperative effects of two or more factors. When the complete decomposition is considered, the number of sensitivity indices to compute is (2^k - 1), so the computational cost grows exponentially with k. This has been referred to as the curse of dimensionality, and it makes the complete decomposition unfeasible in most practical applications. In this paper we show that the information contained in the samples used to compute suitably defined subsets A of the (2^k - 1) indices can be reused to compute the complementary subsets A* of indices at no additional cost. This property significantly reduces the growth of the computational cost as k increases.
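As a concrete illustration of the quantities the abstract refers to (and not of the authors' cost-saving method itself), the sketch below estimates the first-order sensitivity indices S_i = V[E(y|x_i)]/V(y) for a simple linear test model using a standard Monte Carlo "pick-freeze" estimator. The model, input distributions, sample size, and estimator form are all illustrative assumptions; for k = 3 factors the complete decomposition would involve 2^3 − 1 = 7 indices in total.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test model (an assumption, not from the paper):
# y = x1 + 2*x2 + 3*x3 with independent xi ~ Uniform(-1, 1).
coeffs = np.array([1.0, 2.0, 3.0])

def f(x):
    return x @ coeffs

k, n = 3, 200_000

# Two independent input samples, each row a draw of (x1, ..., xk).
A = rng.uniform(-1.0, 1.0, size=(n, k))
B = rng.uniform(-1.0, 1.0, size=(n, k))

yA = f(A)
var_y = yA.var()

# First-order indices via a standard pick-freeze estimator:
# freeze column i from A, resample every other factor from B.
S = np.empty(k)
for i in range(k):
    AB = B.copy()
    AB[:, i] = A[:, i]
    S[i] = np.mean(yA * (f(AB) - f(B))) / var_y

# For this additive model the analytic values are a_i^2 / sum(a_j^2),
# i.e. 1/14, 4/14, 9/14; the estimates should land close to these.
print("number of indices in a complete decomposition:", 2**k - 1)
print("estimated first-order indices:", S)
```

Because the test model is purely additive, the first-order indices sum to roughly one and all higher-order terms vanish; for a model with interactions, the remaining 2^k − 1 − k indices would carry the cooperative effects the abstract mentions.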