An efficient projection for ℓ1,∞ regularization

  • Authors:
  • Ariadna Quattoni, Xavier Carreras, Michael Collins, Trevor Darrell

  • Affiliations:
  • Ariadna Quattoni: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA; UC Berkeley EECS and ICSI, Berkeley, CA
  • Xavier Carreras: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA
  • Michael Collins: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA
  • Trevor Darrell: UC Berkeley EECS and ICSI, Berkeley, CA

  • Venue:
  • ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
  • Year:
  • 2009

Abstract

In recent years the ℓ1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims to extend the ℓ1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimizing ℓ1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the ℓ1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that ℓ1,∞ leads to better performance than both ℓ2 and ℓ1 regularization, and that it is effective in discovering jointly sparse solutions.
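
For intuition, below is a minimal NumPy sketch of the projection onto the ℓ1,∞ ball {B : Σ_i max_j |B_ij| ≤ C}, where rows index features and columns index tasks. This is not the paper's exact single-pass O(n log n) algorithm: instead of the authors' sort-based threshold search, it bisects on a dual variable θ, computing each row's prox of θ‖·‖∞ via the Moreau decomposition (the prox of the ℓ∞ norm equals the identity minus projection onto an ℓ1 ball). All function names here are illustrative.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of vector v onto the l1 ball of given radius.

    Standard sort-based method, O(d log d) in the row length d. Used below
    via Moreau decomposition: prox of theta*||.||_inf applied to w equals
    w - project_l1_ball(w, theta).
    """
    if radius <= 0:
        return np.zeros_like(v)
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()  # already inside the l1 ball
    s = np.sort(u)[::-1]
    cssv = np.cumsum(s) - radius
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > cssv)[0][-1]
    tau = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(u - tau, 0.0)

def project_l1inf_ball(W, C, tol=1e-8, max_iter=100):
    """Project matrix W (features x tasks) onto {B : sum_i max_j |B_ij| <= C}.

    A bisection sketch, NOT the authors' exact O(n log n) method: for each
    candidate theta, the row-wise prox of theta*||.||_inf is evaluated, and
    theta is tuned so the row maxima sum to C. The sum of row maxima is
    non-increasing in theta, so bisection converges.
    """
    if np.abs(W).max(axis=1).sum() <= C:
        return W.copy()  # already inside the ball, projection is identity

    def row_prox(theta):
        # prox of theta*||.||_inf for each row, via Moreau decomposition
        return np.stack([w - project_l1_ball(w, theta) for w in W])

    lo, hi = 0.0, np.abs(W).sum(axis=1).max()  # at hi, every row prox is 0
    for _ in range(max_iter):
        theta = 0.5 * (lo + hi)
        if np.abs(row_prox(theta)).max(axis=1).sum() > C:
            lo = theta  # still outside the ball: shrink more
        else:
            hi = theta  # feasible: try shrinking less
        if hi - lo < tol:
            break
    return row_prox(hi)  # hi is always on the feasible side
```

With W the feature-by-task parameter matrix, C the ball radius, and grad the gradient of the (smooth) loss, one projected gradient step is then W = project_l1inf_ball(W - lr * grad, C); rows whose maximum is driven to zero correspond to features discarded jointly across all tasks, which is the jointly sparse behavior the abstract describes.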