In this paper we consider methods for the maximum likelihood estimation of sparse Gaussian graphical (covariance selection) models when the number of variables is very large (tens of thousands or more). We present a procedure for determining the pattern of zeros in the model, and we discuss the use of limited-memory quasi-Newton and truncated Newton algorithms to fit the model by maximum likelihood. We present ways of computing the gradient and likelihood function values for such models efficiently enough to run on a desktop computer. For the truncated Newton method we also present an efficient way of computing the action of the Hessian matrix on an arbitrary vector that does not require computing or storing the Hessian itself. The methods are illustrated and compared on simulated data and applied to a real microarray data set. The limited-memory quasi-Newton method is recommended for practical use.
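The quantities the abstract refers to can be sketched concretely. For a precision matrix K and sample covariance S, the (scaled) negative log-likelihood is f(K) = tr(SK) − log det K, its gradient is S − K⁻¹, and the Hessian acting on a symmetric direction V is K⁻¹VK⁻¹, which can be applied without ever forming the p²×p² Hessian. The following is a minimal dense-matrix illustration of these formulas; it is not the paper's method, which exploits sparsity (e.g. sparse Cholesky factors and computing only selected entries of K⁻¹) to scale to tens of thousands of variables:

```python
import numpy as np

def neg_loglik(K, S):
    """f(K) = tr(S K) - log det K, up to constants (K is the precision matrix)."""
    sign, logdet = np.linalg.slogdet(K)
    assert sign > 0, "K must be positive definite"
    return np.trace(S @ K) - logdet

def gradient(K, S):
    """grad f = S - inv(K).  (The paper computes entries of inv(K) only on
    the sparsity pattern, avoiding the full dense inverse.)"""
    return S - np.linalg.inv(K)

def hessian_vector(K, V):
    """Action of the Hessian of f on a symmetric direction V:
    H[V] = inv(K) @ V @ inv(K), without forming the p^2 x p^2 Hessian."""
    Kinv = np.linalg.inv(K)
    return Kinv @ V @ Kinv

# Tiny example: 3 variables, sample covariance from random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
S = np.cov(X, rowvar=False)
K = np.eye(3)                 # a simple positive-definite starting point

g = gradient(K, S)            # equals S - I at K = I
Hv = hessian_vector(K, g)     # equals g at K = I, since inv(I) = I
```

In a truncated Newton iteration, `hessian_vector` is exactly the operator handed to a conjugate-gradient inner solver, which is why the Hessian never needs to be materialized.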