On the convergence of the coordinate descent method for convex differentiable minimization
Journal of Optimization Theory and Applications
Determinant Maximization with Linear Matrix Inequality Constraints
SIAM Journal on Matrix Analysis and Applications
Sparse graphical models for exploring gene expression data
Journal of Multivariate Analysis
Smooth minimization of non-smooth functions
Mathematical Programming, Series A and B
Modeling changing dependency structure in multivariate time series
Proceedings of the 24th International Conference on Machine Learning (ICML '07)
Exploiting sparse Markov and covariance structure in multiresolution models
Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09)
Sparse Gaussian graphical models with unknown block structure
Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09)
Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Learning graphical model structure using L1-regularization paths
Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI '07), Volume 2
Gaussian multiresolution models: exploiting sparse Markov and covariance structure
IEEE Transactions on Signal Processing
Structure learning with nonparametric decomposable models
Proceedings of the 17th International Conference on Artificial Neural Networks (ICANN '07)
The importance of dilution in the inference of biological networks
Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton '09)
Inferring gene interaction networks from ISH images via kernelized graphical models
Proceedings of the 12th European Conference on Computer Vision (ECCV '12), Part VI
NP-MuScL: unsupervised global prediction of interaction networks from multiple data sources
Proceedings of the 17th International Conference on Research in Computational Molecular Biology (RECOMB '13)
Monitoring the covariance matrix with fewer observations than variables
Computational Statistics & Data Analysis
We consider the problem of fitting a large-scale covariance matrix to multivariate Gaussian data in such a way that the inverse is sparse, thus providing model selection. Beginning with a dense empirical covariance matrix, we solve a maximum likelihood problem with an l1-norm penalty term added to encourage sparsity in the inverse. For models with tens of nodes, the resulting problem can be solved using standard interior-point algorithms for convex optimization, but these methods scale poorly with problem size. We present two new algorithms aimed at solving problems with a thousand nodes. The first, based on Nesterov's first-order algorithm, yields a rigorous complexity estimate for the problem, with a much better dependence on problem size than interior-point methods. Our second algorithm uses block coordinate descent, updating rows and columns of the covariance matrix sequentially. Experiments with genomic data show that our method is able to uncover biologically interpretable connections among genes.
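The l1-penalized maximum likelihood objective described above is what is now commonly called the graphical lasso. As a minimal illustrative sketch (not the authors' implementation), scikit-learn's GraphicalLasso estimator solves the same penalized objective; the chain-structured ground-truth precision matrix, the sample size, and the penalty value alpha=0.05 below are all illustrative assumptions:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Illustrative ground truth: a sparse precision (inverse covariance)
# matrix encoding a chain graph over p = 5 variables.
p = 5
prec = np.eye(p)
for i in range(p - 1):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)

# Sample Gaussian data with that covariance.
X = rng.multivariate_normal(np.zeros(p), cov, size=2000)

# Fit the l1-penalized maximum likelihood problem; larger alpha
# drives more entries of the estimated inverse to exactly zero.
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_  # sparse estimate of the inverse covariance
```

Zero entries in `est_prec` correspond to conditional independences in the selected Gaussian graphical model; sweeping alpha trades off sparsity against data fit.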