In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm, first proposed by Stanley Osher and his collaborators, is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, and that the ALB algorithm reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems demonstrate that the ALB method can be significantly faster than the LB method.
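To make the dual-gradient-descent view concrete, the following NumPy sketch shows a standard linearized Bregman iteration for basis pursuit ($\min \|x\|_1$ s.t. $Ax=b$) alongside a Nesterov-accelerated variant. This is an illustrative sketch, not the authors' exact algorithm: the particular parameterization ($x = \mu\,\mathrm{shrink}(v, 1)$, which solves the regularized problem $\min \|x\|_1 + \frac{1}{2\mu}\|x\|_2^2$ s.t. $Ax=b$), the step size $\tau = 1/(\mu\|A\|_2^2)$, and the extrapolation weight $k/(k+3)$ are assumptions chosen for the example.

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding: the proximal map of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, mu, tau, iters):
    # Plain LB iteration, viewed as gradient descent on a dual
    # formulation: v accumulates A^T times the residual, and the
    # primal iterate is recovered by soft-thresholding v.
    n = A.shape[1]
    v = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        v = v + tau * (A.T @ (b - A @ x))
        x = mu * shrink(v, 1.0)
    return x

def accelerated_lb(A, b, mu, tau, iters):
    # Accelerated variant (sketch): apply Nesterov-style extrapolation
    # to the dual variable before each gradient step. The weight
    # k / (k + 3) is a common choice, assumed here for illustration.
    n = A.shape[1]
    v = np.zeros(n)
    v_prev = np.zeros(n)
    for k in range(iters):
        y = v + (k / (k + 3.0)) * (v - v_prev)  # extrapolated dual point
        x = mu * shrink(y, 1.0)                 # primal iterate at y
        v_prev = v
        v = y + tau * (A.T @ (b - A @ x))       # gradient step from y
    return x
```

For a well-conditioned compressed sensing instance, both routines drive the residual $\|Ax - b\|$ toward zero, with the accelerated variant typically doing so in noticeably fewer iterations, mirroring the $O(1/\epsilon)$ vs. $O(1/\sqrt{\epsilon})$ complexities discussed above.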