In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization extends the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimizing l1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the l1,∞ ball. We present an algorithm that runs in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that l1,∞ regularization leads to better performance than both l2 and l1 regularization, and that it is effective in discovering jointly sparse solutions.
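The abstract's O(n log n) projection belongs to a family of sort-then-threshold algorithms. As a minimal illustration of that pattern (not the paper's l1,∞ algorithm itself, which thresholds across per-feature groups of tasks), here is a sketch of the closely related Euclidean projection onto the l1 ball: sort magnitudes, find the largest index whose entry survives a running soft-threshold, then shrink every coordinate by that threshold. The function name and signature are illustrative, not from the paper.

```python
def project_l1_ball(v, z=1.0):
    """Euclidean projection of vector v onto the l1 ball of radius z.

    Sort-and-threshold method, O(n log n) from the sort. The l1,inf
    projection in the paper follows the same strategy, but applied to
    the column-wise maxima of a task-by-feature parameter matrix.
    """
    if sum(abs(x) for x in v) <= z:
        return list(v)  # already inside the ball: projection is identity
    u = sorted((abs(x) for x in v), reverse=True)  # magnitudes, descending
    cssv = 0.0          # running cumulative sum of sorted magnitudes
    theta = 0.0         # soft-threshold level to subtract from each |v_i|
    for k, uk in enumerate(u, start=1):
        cssv += uk
        if uk > (cssv - z) / k:        # entry k still survives thresholding
            theta = (cssv - z) / k
    return [(1.0 if x >= 0 else -1.0) * max(abs(x) - theta, 0.0) for x in v]
```

For example, projecting [3, 1] onto the unit l1 ball yields [1, 0]: the threshold theta = 2 zeroes the smaller coordinate and shrinks the larger one until the l1 constraint is tight. Inside a projected gradient method, this projection is applied after each gradient step to restore feasibility.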