Iterative numerical methods for sampling from high dimensional Gaussian distributions
Statistics and Computing
In order to compute the log-likelihood for high dimensional Gaussian models, it is necessary to compute the log-determinant of the large, sparse, symmetric positive definite precision matrix. Traditional methods for evaluating the log-likelihood, which are typically based on Cholesky factorisations, are not feasible for very large models due to their massive memory requirements. We present a novel approach for evaluating such likelihoods that requires only matrix-vector products. In this approach we utilise matrix functions, Krylov subspaces, and probing vectors to construct an iterative numerical method for computing the log-likelihood.
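To illustrate the kind of matrix-free estimator the abstract describes, the sketch below combines Hutchinson trace estimation with Lanczos quadrature to approximate log det(A) = tr(log A) for a sparse SPD matrix using only matrix-vector products. This is a minimal, standard variant, not the paper's exact algorithm: the authors use deterministic probing vectors, whereas this sketch uses random Rademacher probes, and all parameter names and defaults here are illustrative assumptions.

```python
import numpy as np

def lanczos_quadrature_logdet(matvec, n, num_probes=30, lanczos_steps=25, rng=None):
    """Estimate log det(A) = tr(log A) for an SPD matrix A accessed only
    through matrix-vector products.

    Each quadratic form v^T log(A) v is approximated by running a few
    Lanczos steps (a Krylov subspace method) from v, then applying Gauss
    quadrature via the eigendecomposition of the small tridiagonal matrix.
    """
    rng = np.random.default_rng(rng)
    estimate = 0.0
    for _ in range(num_probes):
        # Rademacher probe vector: E[v v^T] = I, so E[v^T f(A) v] = tr(f(A))
        v = rng.choice([-1.0, 1.0], size=n)
        v_norm = np.linalg.norm(v)
        q = v / v_norm
        q_prev = np.zeros(n)
        beta = 0.0
        alphas, betas = [], []
        for j in range(lanczos_steps):
            w = matvec(q) - beta * q_prev
            alpha = q @ w
            w -= alpha * q          # no full reorthogonalization, for brevity
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            if beta < 1e-12 or j == lanczos_steps - 1:
                break               # Krylov subspace exhausted or budget spent
            betas.append(beta)
            q_prev, q = q, w / beta
        # Tridiagonal Jacobi matrix from the Lanczos recurrence
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        theta, U = np.linalg.eigh(T)     # Ritz values = quadrature nodes
        weights = U[0, :] ** 2           # quadrature weights
        # v^T log(A) v  ≈  ||v||^2 * e1^T log(T) e1
        estimate += v_norm**2 * np.sum(weights * np.log(theta))
    return estimate / num_probes
```

For a diagonal test matrix the Rademacher probes are exact for the trace, so the only error comes from the Lanczos quadrature itself, which converges rapidly for well-conditioned matrices; in practice the matvec would come from a sparse precision matrix.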