Many applications require efficient sampling from Gaussian distributions. The method of choice depends on the dimension of the problem and on the structure of the covariance matrix (Σ) or precision matrix (Q). The most common black-box routine for computing a sample is based on the Cholesky factorization. In high dimensions, computing the Cholesky factor of Σ or Q may be prohibitive because the factor accumulates more non-zero entries than can be stored in memory. We compare different methods for computing samples iteratively, adapting ideas from numerical linear algebra. These methods assume that matrix-vector products, Qv, are fast to compute. We show that some of these methods are competitive with, and faster than, Cholesky sampling, and that a parallel version of one method on a Graphics Processing Unit (GPU) using CUDA can achieve a speed-up of up to 30x. Moreover, one method is used to sample from the posterior distribution of petroleum reservoir parameters in a North Sea field, given seismic reflection data on a large 3D grid.
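The Cholesky baseline that the abstract compares against can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tridiagonal precision matrix is an arbitrary example, and the iterative methods studied in the paper would replace the factorization with operations built only from matrix-vector products Qv.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small SPD precision matrix: tridiagonal, AR(1)-like (illustrative choice).
n = 200
Q = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.9), 1)
     + np.diag(np.full(n - 1, -0.9), -1))

# Cholesky sampling from N(0, Q^{-1}): with Q = L L^T and z ~ N(0, I),
# x = L^{-T} z has covariance L^{-T} L^{-1} = (L L^T)^{-1} = Q^{-1}.
L = np.linalg.cholesky(Q)       # lower-triangular factor of Q
N = 5000                        # number of samples
Z = rng.standard_normal((n, N))
X = np.linalg.solve(L.T, Z)     # each column is one sample

# Sanity check: the empirical covariance should approach Q^{-1}.
emp_cov = X @ X.T / N
err = np.max(np.abs(emp_cov - np.linalg.inv(Q)))
print(f"max |empirical cov - Q^(-1)| = {err:.3f}")
```

In high dimensions the factor L can fill in far beyond the sparsity of Q, which is exactly the memory bottleneck the abstract describes and the motivation for matrix-vector-product-only samplers.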