We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual ℓ1-norm and the group ℓ1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to map back and forth between groups and patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low- and high-dimensional settings.
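The norm described above, and the link between groups and allowed nonzero patterns, can be sketched in a few lines of Python. This is a minimal illustration, not the paper's algorithms: `group_norm` evaluates Ω(w) = Σ_g d_g ‖w_g‖₂ for possibly overlapping groups, and `nonzero_patterns` enumerates the supports the norm can produce (complements of unions of groups), by brute force over subsets of groups. The function names, the per-group weights `d_g`, and the brute-force enumeration are illustrative choices, not from the source.

```python
import math
from itertools import combinations


def group_norm(w, groups, weights=None):
    """Structured sparsity-inducing norm: Omega(w) = sum_g d_g * ||w_g||_2.

    w       -- coefficient vector (list of floats)
    groups  -- list of index lists; groups may overlap
    weights -- optional per-group weights d_g (defaults to 1.0 each)
    """
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(
        d * math.sqrt(sum(w[j] ** 2 for j in g))
        for g, d in zip(groups, weights)
    )


def nonzero_patterns(groups, p):
    """Allowed nonzero patterns for a vector of dimension p.

    A group that is driven to zero kills all of its variables, so the
    possible zero patterns are unions of groups; the allowed nonzero
    patterns are their complements.  Brute force over subsets of groups.
    """
    patterns = set()
    for r in range(len(groups) + 1):
        for subset in combinations(groups, r):
            zeros = set().union(*subset) if subset else set()
            patterns.add(frozenset(range(p)) - zeros)
    return patterns


# With disjoint singleton groups the norm reduces to the plain l1-norm:
w = [3.0, -4.0, 0.0]
print(group_norm(w, [[0], [1], [2]]))          # l1-norm of w

# Overlapping nested groups {0,1,2} and {1,2}: zeroing the inner group
# alone leaves only variable 0 active, so {0} is an allowed support.
print(sorted(map(sorted, nonzero_patterns([[0, 1, 2], [1, 2]], 3))))
```

Note how overlap shapes the patterns: with the nested groups above, the only attainable supports are the full set {0, 1, 2}, the singleton {0}, and the empty set, which is exactly the kind of prior knowledge (here, a hierarchy) that the choice of groups encodes.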