Subgradient and sampling algorithms for l1 regression

  • Authors:
  • Kenneth L. Clarkson

  • Affiliations:
  • Bell Labs, Murray Hill, New Jersey

  • Venue:
  • SODA '05: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms
  • Year:
  • 2005

Abstract

Given an n × d matrix A and an n-vector b, the l1 regression problem is to find the vector x minimizing the objective function ||Ax - b||1, where ||y||1 ≡ Σi |yi| for a vector y. This paper gives an algorithm needing O(n log n)·d^O(1) time in the worst case to obtain an approximate solution, with objective function value within a fixed ratio of optimum. Given ε > 0, a solution whose value is within a factor 1 + ε of optimum can be obtained either by a deterministic algorithm using an additional O(n)·(d/ε)^O(1) time, or by a Monte Carlo algorithm using an additional O((d/ε)^O(1)) time. The analysis of the randomized algorithm shows that weighted coresets exist for l1 regression. The algorithms use the ellipsoid method, gradient descent, and random sampling.
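To make the objective concrete, here is a minimal sketch of the l1 regression problem solved by plain normalized subgradient descent. This is not the paper's algorithm (which combines the ellipsoid method, gradient descent, and random sampling to get the stated time bounds); it only illustrates the objective f(x) = ||Ax - b||1 and the subgradient A^T sign(Ax - b) that subgradient-based methods for this problem use. The function name and step-size schedule are illustrative choices, not from the paper.

```python
import numpy as np

def l1_regression_subgradient(A, b, iters=3000):
    """Approximately minimize f(x) = ||Ax - b||_1 over x.

    Illustrative sketch only: a subgradient of f at x is
    A^T sign(Ax - b); we take normalized steps of length 1/sqrt(t)
    and keep the best iterate seen, a standard scheme for
    nondifferentiable convex objectives.
    """
    n, d = A.shape
    x = np.zeros(d)
    best_x = x.copy()
    best_val = np.abs(A @ x - b).sum()
    for t in range(1, iters + 1):
        r = A @ x - b
        g = A.T @ np.sign(r)              # subgradient of ||Ax - b||_1
        step = 1.0 / np.sqrt(t)           # diminishing step size
        x = x - step * g / (np.linalg.norm(g) + 1e-12)
        val = np.abs(A @ x - b).sum()
        if val < best_val:                # track the best iterate
            best_x, best_val = x.copy(), val
    return best_x, best_val
```

For example, with b = A·x* for some planted x*, the optimum value is 0 and the returned objective value shrinks toward it as the iteration count grows; the method's 1/√t rate is exactly the slow convergence that the paper's combination of techniques is designed to avoid.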