Given an n × d matrix A and an n-vector b, the l1 regression problem is to find the vector x minimizing the objective function ||Ax - b||1, where ||y||1 = Σi |yi| for a vector y. This paper gives an algorithm needing O(n log n)·d^O(1) time in the worst case to obtain an approximate solution, with objective function value within a fixed ratio of optimum. Given ε > 0, a solution whose value is within a factor of 1 + ε of optimum can be obtained either by a deterministic algorithm using an additional O(n)·(d/ε)^O(1) time, or by a Monte Carlo algorithm using an additional O((d/ε)^O(1)) time. The analysis of the randomized algorithm shows that weighted coresets exist for l1 regression. The algorithms use the ellipsoid method, gradient descent, and random sampling.
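To make the objective concrete, the sketch below computes ||Ax - b||1 and approximately minimizes it with iteratively reweighted least squares. This is only an illustrative stand-in for the paper's actual ellipsoid/gradient-descent/sampling machinery; the function names, the IRLS scheme, and all parameter choices are assumptions for the example, not the paper's algorithm.

```python
import numpy as np

def l1_objective(A, x, b):
    """The l1 regression objective ||Ax - b||_1 = sum_i |(Ax - b)_i|."""
    return np.abs(A @ x - b).sum()

def l1_regression_irls(A, b, iters=50, eps=1e-8):
    """Approximate argmin_x ||Ax - b||_1 by iteratively reweighted
    least squares: each step solves a weighted l2 problem with row
    weights w_i^2 = 1/|r_i|, so the weighted l2 objective equals
    sum_i r_i^2 / |r_i| = sum_i |r_i|, the l1 objective."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # l2 warm start
    for _ in range(iters):
        r = np.abs(A @ x - b)
        w = 1.0 / np.sqrt(np.maximum(r, eps))  # clamp tiny residuals
        x, *_ = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)
    return x

# Synthetic instance: heavy-tailed (Laplace) noise favors the l1 fit.
rng = np.random.default_rng(0)
n, d = 500, 3
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + rng.laplace(scale=0.1, size=n)

x_hat = l1_regression_irls(A, b)
```

The LP reformulation (minimize 1ᵀt subject to -t ≤ Ax - b ≤ t) would give the exact optimum; IRLS is used here only to keep the sketch dependency-free.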