Multiplicative updates for L1-regularized linear and logistic regression

  • Authors:
  • Fei Sha, Y. Albert Park, Lawrence K. Saul

  • Affiliations:
  • Computer Science Division, University of California, Berkeley, CA; Department of Computer Science and Engineering, UC San Diego, La Jolla, CA; Department of Computer Science and Engineering, UC San Diego, La Jolla, CA

  • Venue:
  • IDA'07: Proceedings of the 7th International Conference on Intelligent Data Analysis
  • Year:
  • 2007

Abstract

Multiplicative update rules have proven useful in many areas of machine learning. Simple to implement and guaranteed to converge, they account in part for the widespread popularity of algorithms such as nonnegative matrix factorization and Expectation-Maximization. In this paper, we show how to derive multiplicative updates for problems in L1-regularized linear and logistic regression. For L1-regularized linear regression, the updates are derived by reformulating the required optimization as a problem in nonnegative quadratic programming (NQP). The dual of this problem, itself an instance of NQP, can also be solved using multiplicative updates; moreover, the observed duality gap can be used to bound the error of intermediate solutions. For L1-regularized logistic regression, we derive similar updates using an iteratively reweighted least squares approach. We present illustrative experimental results and describe efficient implementations for large-scale problems of interest (e.g., with tens of thousands of examples and over one million features).
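
To make the NQP reformulation concrete, below is a minimal NumPy sketch of the linear-regression case: the L1 penalty is handled by splitting the weights as w = u - v with u, v >= 0, and the resulting nonnegative quadratic program is solved with a multiplicative update of the closed form the abstract refers to. The function name l1_least_squares_mu, the all-ones initialization, the fixed iteration count, and the small eps safeguard are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def l1_least_squares_mu(X, y, lam, n_iter=500, eps=1e-12):
    """Multiplicative-update sketch for L1-regularized least squares:

        min_w  0.5 * ||X w - y||^2 + lam * ||w||_1

    Splitting w = u - v with u, v >= 0 turns this into a nonnegative
    quadratic program (NQP) in z = [u; v]:

        min_z  0.5 * z' A z + b' z   subject to  z >= 0,

    where Q = X'X, A = [[Q, -Q], [-Q, Q]], and b = [lam - X'y; lam + X'y].
    """
    d = X.shape[1]
    Q = X.T @ X
    Xty = X.T @ y

    # Assemble the NQP data for the split variables z = [u; v].
    A = np.block([[Q, -Q], [-Q, Q]])
    b = np.concatenate([lam - Xty, lam + Xty])

    # Decompose A into its elementwise positive and negative parts,
    # A = Ap - Am, which appear in the multiplicative update.
    Ap = np.maximum(A, 0.0)
    Am = np.maximum(-A, 0.0)

    z = np.ones(2 * d)  # strictly positive start keeps the updates well defined
    for _ in range(n_iter):
        a = Ap @ z + eps  # (A+ z)_i, with eps guarding against division by zero
        c = Am @ z        # (A- z)_i
        # Closed-form multiplicative factor for NQP:
        #   z_i <- z_i * (-b_i + sqrt(b_i^2 + 4 a_i c_i)) / (2 a_i)
        z *= (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)

    u, v = z[:d], z[d:]
    return u - v  # recover the signed regression weights
```

The dual NQP and the duality-gap test for bounding the error of intermediate solutions are omitted here; a fixed iteration count stands in for a principled stopping rule. For the logistic-regression case, the paper's iteratively reweighted least squares approach would repeatedly solve weighted problems of this form.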