This study introduces a non-intrusive approach, based on low-rank separated representations, to construct a surrogate of high-dimensional stochastic functions, e.g., solutions of PDEs/ODEs with random inputs, in order to reduce the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate is constructed via a regularized alternating least-squares regression with Tikhonov regularization, whose roughening matrix approximates the gradient of the solution, in conjunction with a perturbation-based error indicator that selects an appropriate model complexity. The model approximates the continuous solution as a vector of its values at discrete points of a physical variable. The number of random realizations required for a successful approximation grows linearly with the dimensionality of the function, and the cost of the model construction is quadratic in the number of random inputs, which helps mitigate the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation, in comparison to the available scalar-valued case, reduces the cost of approximation by a factor equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
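The separated-representation construction described above can be illustrated with a minimal alternating least-squares (ALS) sketch. This is not the paper's implementation: it assumes a scalar-valued function, a Legendre polynomial basis, and replaces the paper's gradient-based roughening matrix with a plain identity (standard Tikhonov) regularizer; the function names and the fixed sweep count are hypothetical choices for illustration.

```python
import numpy as np
from numpy.polynomial import legendre


def als_separated_fit(Y, f, rank=3, degree=3, lam=1e-8, sweeps=50, seed=0):
    """Fit f(y) ~ sum_{r=1}^{rank} prod_{d=1}^{D} u_{r,d}(y_d),
    where each univariate factor u_{r,d} is a Legendre expansion.

    Alternating least squares: cycle over dimensions, holding all other
    dimensions' factors fixed, and solve a Tikhonov-regularized linear
    least-squares problem for the current dimension's coefficients.
    (Identity regularizer here; the paper uses a roughening matrix that
    approximates the gradient of the solution.)
    """
    N, D = Y.shape
    P = degree + 1                      # basis functions per dimension
    rng = np.random.default_rng(seed)
    C = 0.1 * rng.standard_normal((D, rank, P))   # factor coefficients
    # Precompute basis evaluations: Phi[d] has shape (N, P)
    Phi = [legendre.legvander(Y[:, d], degree) for d in range(D)]
    for _ in range(sweeps):
        for d in range(D):
            # Product of all other dimensions' factors: (N, rank)
            U = np.ones((N, rank))
            for dd in range(D):
                if dd != d:
                    U *= Phi[dd] @ C[dd].T
            # Design matrix for dimension d: (N, rank * P)
            A = (U[:, :, None] * Phi[d][:, None, :]).reshape(N, rank * P)
            # Regularized normal equations (Tikhonov with identity)
            c = np.linalg.solve(A.T @ A + lam * np.eye(rank * P), A.T @ f)
            C[d] = c.reshape(rank, P)
    return C, Phi


def evaluate(C, Y, degree):
    """Evaluate the fitted separated representation at samples Y."""
    N, D = Y.shape
    U = np.ones((N, C.shape[1]))
    for d in range(D):
        U *= legendre.legvander(Y[:, d], degree) @ C[d].T
    return U.sum(axis=1)
```

Because each ALS sub-problem is linear in one dimension's coefficients while the others are frozen, the per-sweep cost scales with the number of random inputs rather than exponentially in it, which is the mechanism behind the quadratic construction cost noted in the abstract.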