A proximal-based decomposition method for convex minimization problems. Mathematical Programming: Series A and B.
Convex Optimization.
The Journal of Machine Learning Research.
First-Order Methods for Sparse Covariance Selection. SIAM Journal on Matrix Analysis and Applications.
Smooth Optimization Approach for Sparse Covariance Selection. SIAM Journal on Optimization.
Regularization Methods for Semidefinite Programming. SIAM Journal on Optimization.
The Split Bregman Method for L1-Regularized Problems. SIAM Journal on Imaging Sciences.
A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM Journal on Imaging Sciences.
High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. The Journal of Machine Learning Research.
Solving Log-Determinant Optimization Problems by a Newton-CG Primal Proximal Point Algorithm. SIAM Journal on Optimization.
Alternating Direction Algorithms for $\ell_1$-Problems in Compressive Sensing. SIAM Journal on Scientific Computing.
Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations. SIAM Journal on Optimization.
Foundations and Trends® in Machine Learning.
Alternating Direction Method for Covariance Selection Models. Journal of Scientific Computing.
Robust subspace discovery via relaxed rank minimization. Neural Computation.
Chandrasekaran, Parrilo, and Willsky (2012) proposed a convex optimization problem for graphical model selection in the presence of unobserved variables. The problem estimates, from sample data, an inverse covariance matrix that decomposes into a sparse matrix minus a low-rank matrix. Solving it is challenging, especially at large scale. In this letter, we propose two alternating direction methods for this problem. The first applies the classic alternating direction method of multipliers (ADMM) to the problem recast as a consensus problem; the second is a proximal gradient-based ADMM. Both methods exploit the special structure of the problem and can therefore solve large instances very efficiently. We establish a global convergence result for the proposed methods. Numerical results on both synthetic data and gene expression data show that our methods usually solve problems with one million variables in one to two minutes and are typically 5 to 35 times faster than a state-of-the-art Newton-CG proximal point algorithm.
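To make the sparse-minus-low-rank decomposition concrete, the sketch below implements a generic three-block ADMM-style splitting for the Chandrasekaran-Parrilo-Willsky objective min <Sigma_hat, R> - log det R + alpha*||S||_1 + beta*tr(L), subject to R = S - L with L positive semidefinite. This is a minimal illustration under assumed parameter choices, not the authors' consensus or proximal gradient-based variants (whose splitting and convergence guarantees differ); all function names are hypothetical, and direct three-block ADMM is not guaranteed to converge in general.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: the prox operator of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def psd_project(X):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, Q = np.linalg.eigh((X + X.T) / 2)
    return (Q * np.maximum(w, 0.0)) @ Q.T

def lvggm_admm(Sigma_hat, alpha, beta, mu=1.0, max_iter=500, tol=1e-6):
    """Illustrative three-block ADMM-style splitting (hypothetical sketch) for
        min <Sigma_hat, R> - log det R + alpha*||S||_1 + beta*tr(L)
        s.t. R = S - L,  L PSD,  R positive definite,
    with multiplier Lam for the constraint R - S + L = 0."""
    p = Sigma_hat.shape[0]
    S, L, Lam = np.eye(p), np.zeros((p, p)), np.zeros((p, p))
    for _ in range(max_iter):
        # R-update: prox of <Sigma_hat,.> - log det; closed form via the
        # eigendecomposition of M = mu*(S - L) - Lam - Sigma_hat, solving
        # mu*gamma - 1/gamma = m for each eigenvalue m of M.
        M = mu * (S - L) - Lam - Sigma_hat
        m, Q = np.linalg.eigh((M + M.T) / 2)
        gamma = (m + np.sqrt(m**2 + 4.0 * mu)) / (2.0 * mu)
        R = (Q * gamma) @ Q.T
        # S-update: entrywise soft-thresholding (prox of the l1 penalty).
        S = soft_threshold(R + L + Lam / mu, alpha / mu)
        # L-update: shifted projection onto the PSD cone (prox of beta*tr(L)
        # plus the PSD constraint).
        L = psd_project(S - R - Lam / mu - (beta / mu) * np.eye(p))
        # Dual ascent on the multiplier for R - S + L = 0.
        resid = R - S + L
        Lam += mu * resid
        if np.linalg.norm(resid, "fro") <= tol * max(1.0, np.linalg.norm(R, "fro")):
            break
    return R, S, L

if __name__ == "__main__":
    # Small synthetic demo: sample covariance of Gaussian data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 30))            # 200 samples, 30 variables
    Sigma_hat = np.cov(X, rowvar=False)
    R, S, L = lvggm_admm(Sigma_hat, alpha=0.1, beta=0.2)
    print("nonzeros in S:", np.count_nonzero(np.abs(S) > 1e-6),
          "rank of L:", np.linalg.matrix_rank(L, tol=1e-6))
```

Each subproblem here has a closed-form solution, which is the structural property the letter's methods exploit: the log-det step costs one eigendecomposition, the sparse step is an entrywise shrinkage, and the low-rank step is a shifted PSD projection. The consensus reformulation mentioned in the abstract avoids the three-block convergence caveat by reducing the scheme to a two-block ADMM.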