l1 Regularization in Infinite Dimensional Feature Spaces

  • Authors and affiliations:
  • Saharon Rosset, IBM T.J. Watson Research Center, Yorktown Heights, NY
  • Grzegorz Swirszcz, IBM T.J. Watson Research Center, Yorktown Heights, NY
  • Nathan Srebro, IBM Haifa Research Lab, Haifa, Israel, and Toyota Technological Institute, Chicago, IL
  • Ji Zhu, University of Michigan, Ann Arbor, MI

  • Venue:
  • COLT'07: Proceedings of the 20th Annual Conference on Learning Theory
  • Year:
  • 2007

Abstract

In this paper we discuss the problem of fitting l1 regularized prediction models in infinite (possibly non-countable) dimensional feature spaces. Our main contributions are:

  • Deriving a generalization of l1 regularization, based on measures, that can be applied in non-countable feature spaces;
  • Proving that the sparsity property of l1 regularization is maintained in infinite dimensions;
  • Devising a path-following algorithm that can generate the set of regularized solutions in "nice" feature spaces; and
  • Presenting an example of penalized spline models where this path-following algorithm is computationally feasible and gives encouraging empirical results.
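
As a reading aid for the first contribution, one natural formulation of a measure-based l1 penalty is sketched below; the notation (feature index set Ω, feature map h, signed measure β) is our own shorthand and not necessarily the paper's.

```latex
% A minimal sketch of a measure-based l1 penalty: the coefficient vector is
% replaced by a signed measure \beta on the feature index set \Omega, and the
% l1 norm by the total variation of that measure.
\[
  \hat{f}(x) = \int_{\Omega} h(x,\omega)\, d\beta(\omega),
  \qquad
  \min_{\beta}\ \sum_{i=1}^{n} L\!\left(y_i, \hat{f}(x_i)\right)
  + \lambda\, \|\beta\|_{\mathrm{TV}},
  \quad \|\beta\|_{\mathrm{TV}} = |\beta|(\Omega).
\]
```

For the path-following and penalized-spline contributions, the sketch below is a finite-grid stand-in rather than the paper's algorithm: the continuum of candidate spline knots is approximated by a dense grid, and scikit-learn's lasso_path traces the l1 regularization path over that grid. The basis choice, grid size, and synthetic target are illustrative assumptions.

```python
# Illustrative finite-grid stand-in (not the paper's algorithm): approximate the
# continuum of candidate spline knots by a dense grid and trace the l1 path.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Dense grid of candidate knots standing in for the uncountable set of locations.
knots = np.linspace(0.0, 1.0, 500)
X = np.maximum(x[:, None] - knots[None, :], 0.0)  # truncated-linear features (x - t)_+

# Coordinate-descent path over a grid of regularization strengths.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

# Sparsity along the path: only a handful of knots receive nonzero coefficients.
for a, c in zip(alphas[::10], coefs.T[::10]):
    print(f"alpha={a:.4f}  active knots={np.count_nonzero(c)}")
```

The printout typically shows only a few active knots even with hundreds of candidates, which is the finite-grid analogue of the sparsity property the paper proves in infinite dimensions.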