A Krylov Subspace Method for Covariance Approximation and Simulation of Random Processes and Fields
Multidimensional Systems and Signal Processing
Computing the linear least-squares estimate of a high-dimensional random quantity given noisy data requires solving a large system of linear equations. In many situations, one can solve this system efficiently using a Krylov subspace method, such as the conjugate gradient (CG) algorithm. Computing the estimation error variances is a more intricate task: the error variances are the diagonal elements of a matrix expression involving the inverse of a given matrix. This paper presents a method for using the conjugate search directions generated by the CG algorithm to obtain a convergent approximation to the estimation error variances. The algorithm for computing the error variances falls out naturally from a new estimation-theoretic interpretation of the CG algorithm. This paper discusses this interpretation and convergence issues and presents numerical examples. The examples include a 10^5-dimensional estimation problem from oceanography.
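The idea described in the abstract can be sketched in a few lines of NumPy. This is not the paper's implementation, only a minimal illustration under assumed notation: prior covariance Lx, measurement matrix H, noise covariance Ln, and data covariance B = H Lx H^T + Ln. CG is run on B x = y to form the estimate, and each B-conjugate search direction p contributes a rank-one reduction to the running approximation of the error variances diag(Lx - Lx H^T B^-1 H Lx).

```python
import numpy as np

def cg_error_variances(Lx, H, Ln, y, n_iter):
    """Sketch (assumed notation): LLSE via CG on the data covariance,
    with an error-variance approximation accumulated from the
    B-conjugate search directions."""
    B = H @ Lx @ H.T + Ln            # data covariance
    x = np.zeros_like(y)
    r = y - B @ x                    # CG residual
    p = r.copy()                     # first search direction
    var = np.diag(Lx).copy()         # start from the prior variances
    for _ in range(n_iter):
        rs = r @ r
        if rs < 1e-12:               # residual negligible: converged
            break
        Bp = B @ p
        denom = p @ Bp               # p is B-conjugate to earlier directions
        alpha = rs / denom
        x += alpha * p
        # rank-one variance reduction contributed by this direction
        q = Lx @ H.T @ p
        var -= q**2 / denom
        r -= alpha * Bp
        beta = (r @ r) / rs
        p = r + beta * p
    xhat = Lx @ H.T @ x              # back-project to the estimate
    return xhat, var
```

After as many iterations as the data dimension (in exact arithmetic), the accumulated rank-one terms reproduce diag(Lx H^T B^-1 H Lx), so `var` converges to the exact error variances; truncating earlier yields the convergent approximation the abstract refers to.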