Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an $\ell_1$ norm regularizer. However, due to the nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or with a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using an augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications, including proteomic data from mass spectrometry and genomic data from array comparative genomic hybridization (array CGH). We demonstrate that our method is many times faster than the existing solvers, and show that it is especially efficient for large p, small n problems, where p is the number of variables and n is the number of samples.
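
To make the splitting idea concrete, below is a minimal sketch (not the authors' implementation) of a split Bregman iteration for the standard fused Lasso objective
0.5*||y - X*beta||^2 + lam1*||beta||_1 + lam2*sum_j |beta_{j+1} - beta_j|,
in which auxiliary variables a = beta and b = D*beta (D the first-difference matrix) decouple the two nonsmooth terms into soft-thresholding steps. The parameter names (mu1, mu2, n_iter) and the dense linear solve are illustrative assumptions; the paper's algorithm and solver details may differ.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal map of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fused_lasso_split_bregman(X, y, lam1, lam2, mu1=1.0, mu2=1.0, n_iter=200):
    n, p = X.shape
    # First-difference matrix D, so (D @ beta)[j] = beta[j+1] - beta[j].
    D = np.diff(np.eye(p), axis=0)
    # The beta-subproblem is a fixed p x p linear system; precompute its matrix.
    A = X.T @ X + mu1 * np.eye(p) + mu2 * (D.T @ D)
    Xty = X.T @ y
    beta = np.zeros(p)
    a = np.zeros(p); b = np.zeros(p - 1)   # splitting variables for beta and D*beta
    u = np.zeros(p); v = np.zeros(p - 1)   # Bregman (dual) variables
    for _ in range(n_iter):
        # 1) Quadratic subproblem in beta: smooth loss plus the two coupling terms.
        beta = np.linalg.solve(A, Xty + mu1 * (a - u) + mu2 * (D.T @ (b - v)))
        # 2) Both l1 terms reduce to elementwise shrinkage on the split variables.
        a = soft_threshold(beta + u, lam1 / mu1)
        b = soft_threshold(D @ beta + v, lam2 / mu2)
        # 3) Bregman updates accumulate the constraint residuals.
        u += beta - a
        v += D @ beta - b
    return beta

In this sketch each iteration costs one linear solve plus two shrinkage steps; for large p, small n problems the system matrix could instead be handled with a factorization computed once or with an iterative solver, which is where the computational advantage discussed in the abstract would come from.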