Sparse coding consists of representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework in which the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm can be computed exactly via a dual approach that amounts to composing elementary proximal operators. Our procedure runs in time linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional methods based on the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate with two types of applications. First, we use fixed hierarchical dictionaries of wavelets to denoise natural images. Second, we apply our optimization tools to dictionary learning, where the learned dictionary elements naturally self-organize into a prespecified arborescent structure, leading to better reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, providing a competitive alternative to probabilistic topic models.
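The composition property described in the abstract can be sketched as follows: for a tree-structured l1/l2 regularizer whose groups are visited from the leaves up to the root, the exact proximal operator is obtained by applying a block soft-thresholding to each group in that order. This is a minimal illustrative sketch, not the authors' released implementation; the function names and the `(indices, weight)` group encoding are assumptions made for the example.

```python
import numpy as np

def block_soft_threshold(v, idx, lam):
    """Shrink the subvector v[idx] toward zero (group-wise soft-thresholding):
    set it to 0 if its l2-norm is below lam, otherwise rescale it."""
    norm = np.linalg.norm(v[idx])
    if norm <= lam:
        v[idx] = 0.0
    else:
        v[idx] = v[idx] * (1.0 - lam / norm)
    return v

def tree_prox(u, groups, lam):
    """Proximal operator of a tree-structured sum of weighted group l2-norms.

    `groups` is a list of (indices, weight) pairs that must be ordered
    from the leaves to the root (every child group before its parent),
    so that composing the elementary operators gives the exact prox.
    """
    v = u.copy()
    for idx, w in groups:
        v = block_soft_threshold(v, idx, lam * w)
    return v

# Toy tree over 3 variables: leaf groups {1} and {2}, root group {0,1,2}.
u = np.array([3.0, 0.5, 0.5])
groups = [([1], 1.0), ([2], 1.0), ([0, 1, 2], 1.0)]
print(tree_prox(u, groups, 1.0))  # leaves are zeroed, then the root shrinks
```

Because children are processed before their parents, a variable zeroed at a leaf stays zero at the root, which is what produces the hierarchical (nested) sparsity patterns mentioned in the abstract; each variable is touched once per group containing it, giving the (close to) linear cost in the number of atoms.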