Spectral methods for parameterized matrix equations

  • Authors:
  • Gianluca Iaccarino; Paul G. Constantine

  • Affiliations:
  • Stanford University; Stanford University

  • Year:
  • 2009

Abstract

In this age of parallel, high-performance computing, simulation of complex physical systems for engineering computations has become routine. However, at the end of such a computation the question arises: how much can we trust these results? Determining appropriate measures of confidence falls in the realm of uncertainty quantification, where the goal is to quantify the variability in the output of a physical model given uncertainty in the model inputs. The computational procedures for computing these measures often reduce to solving an appropriate matrix equation whose inputs depend on a set of parameters. In this work, we present the problem of solving a system of linear equations where the coefficient matrix and the right-hand side are parameterized by a set of independent variables. Each parameterizing variable has its own range, and we assume a separable weight function on the product space induced by the variables. By assuming that the elements of the matrix and right-hand side depend analytically on the parameters (i.e., they can be represented as power series expansions) and by requiring that the matrix be nonsingular for every parameter value, we ensure the existence of a unique solution. We present a class of multivariate polynomial approximation methods, known in numerical PDE communities as spectral methods, for approximating the vector-valued function that satisfies the parameterized system of equations at each point in the parameter space. These approximations converge rapidly to the true solution in a mean-squared sense as the polynomial degree increases, and they provide flexible and robust methods for computation. We derive rigorous asymptotic error estimates for a spectral Galerkin method and an interpolating pseudospectral method, as well as a more practical a posteriori residual error estimate. We also explore the connections between these two classes of methods and derive conditions under which both yield the same approximation.
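To make the setting concrete, the following is a minimal sketch of an interpolating pseudospectral method for a univariate parameterized system A(s) x(s) = b(s) on [-1, 1]. The particular 2x2 matrix, right-hand side, and choice of Gauss-Legendre points are illustrative assumptions, not examples from the paper; the pattern is simply: solve the system at quadrature points, then project onto orthonormal polynomials.

```python
import numpy as np

# Hypothetical 2x2 parameterized system A(s) x(s) = b(s), s in [-1, 1].
# A(s) is nonsingular for every s in the interval, so x(s) is analytic in s.
def A(s):
    return np.array([[3.0 + s, 1.0],
                     [1.0, 3.0 + 0.5 * s]])

def b(s):
    return np.array([1.0, np.exp(s)])

def pseudospectral_coeffs(n):
    """Interpolating pseudospectral method: solve the system at the n
    Gauss-Legendre points, then compute expansion coefficients in the
    normalized Legendre basis by quadrature."""
    pts, wts = np.polynomial.legendre.leggauss(n)
    sols = np.array([np.linalg.solve(A(s), b(s)) for s in pts])
    coeffs = np.zeros((n, sols.shape[1]))
    for k in range(n):
        # Normalized Legendre polynomial of degree k at the quadrature points.
        phi_k = np.sqrt((2 * k + 1) / 2.0) * \
            np.polynomial.legendre.Legendre.basis(k)(pts)
        coeffs[k] = (wts[:, None] * phi_k[:, None] * sols).sum(axis=0)
    return coeffs

def evaluate(coeffs, s):
    """Evaluate the polynomial approximation of x(s)."""
    n = coeffs.shape[0]
    phi = np.array([np.sqrt((2 * k + 1) / 2.0) *
                    np.polynomial.legendre.Legendre.basis(k)(s)
                    for k in range(n)])
    return phi @ coeffs

# Spectral convergence: the pointwise error shrinks rapidly as the
# polynomial degree grows, since x(s) is analytic.
s0 = 0.3
exact = np.linalg.solve(A(s0), b(s0))
for n in (2, 4, 8):
    err = np.abs(evaluate(pseudospectral_coeffs(n), s0) - exact).max()
    print(n, err)
```

Because the matrix entries are analytic and the matrix stays nonsingular on the parameter range, the error decays at a geometric rate in the polynomial degree, which is the rapid mean-squared convergence the abstract refers to.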
Where the methods differ, we explain the discrepancies in terms of the symmetric, tridiagonal Jacobi matrices associated with the weight function. Unfortunately, standard multivariate spectral methods suffer from the so-called curse of dimensionality: as the number of parameters increases, the work required to compute the approximation grows exponentially. To combat this curse, we exploit the flexibility in the choice of multivariate basis polynomials within a Galerkin framework to construct an efficient representation that takes advantage of any anisotropic dependence the solution may exhibit with respect to different parameters. Despite the savings from the anisotropic approximation, the size of the systems to be solved remains daunting. We therefore offer strategies for large-scale problems based on a unique factorization of the Galerkin system matrix. The factorization allows for a straightforward implementation of the method, as well as useful tools for further analysis. To complement the analysis, we demonstrate the power and efficiency of the spectral methods with a series of examples, including a univariate example from a PageRank model for ranking nodes in a graph and two variants of a conjugate heat transfer problem with uncertain flow conditions.
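The PageRank example mentioned above fits the parameterized-matrix-equation mold naturally: the damping factor plays the role of the uncertain parameter. Below is a hedged sketch (the 4-node graph, sample points, and polynomial degree are illustrative assumptions, not the paper's actual setup) showing the system (I - alpha P) x(alpha) = (1 - alpha) v and an a posteriori residual check of a polynomial surrogate.

```python
import numpy as np

# Hypothetical 4-node link graph; columns of P are normalized to sum to 1.
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
P = links / links.sum(axis=0)          # column-stochastic transition matrix
v = np.full(4, 0.25)                   # uniform teleportation vector

def pagerank(alpha):
    """PageRank as a parameterized matrix equation:
    (I - alpha * P) x = (1 - alpha) * v, with damping parameter alpha."""
    return np.linalg.solve(np.eye(4) - alpha * P, (1 - alpha) * v)

# Treat alpha as uncertain on [0.1, 0.9] and fit a low-degree polynomial
# surrogate for each component of the solution through sampled solves.
alphas = np.linspace(0.1, 0.9, 5)
X = np.array([pagerank(a) for a in alphas])
surrogate = [np.polynomial.Polynomial.fit(alphas, X[:, j], deg=4)
             for j in range(4)]

def residual(alpha):
    """A posteriori residual error estimate: plug the surrogate back into
    the original equation and measure how far it is from being a solution."""
    x_approx = np.array([p(alpha) for p in surrogate])
    r = (np.eye(4) - alpha * P) @ x_approx - (1 - alpha) * v
    return np.linalg.norm(r)

print(pagerank(0.85))   # components sum to 1 for a column-stochastic P
print(residual(0.5))    # tiny at a sample point: surrogate nearly solves it
```

The residual is cheap to evaluate anywhere in the parameter range without another linear solve, which is what makes it a practical error estimate.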