Methods of L1-estimation of a covariance matrix
Computational Statistics & Data Analysis - Special issue on statistical data analysis based on the L1 norm and related methods
Approximate nearest neighbors: towards removing the curse of dimensionality
STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing
The space complexity of approximating the frequency moments
Journal of Computer and System Sciences
Dimension Reduction in the ℓ1 Norm
FOCS '02 Proceedings of the 43rd Symposium on Foundations of Computer Science
Algorithmic Applications of Low-Distortion Geometric Embeddings
FOCS '01 Proceedings of the 42nd IEEE symposium on Foundations of Computer Science
Tabulation based 4-universal hashing with applications to second moment estimation
SODA '04 Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms
Optimal approximations of the frequency moments of data streams
Proceedings of the thirty-seventh annual ACM symposium on Theory of computing
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1
Subgradient and sampling algorithms for l1 regression
SODA '05 Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms
On the impossibility of dimension reduction in l1
Journal of the ACM (JACM)
Stable distributions, pseudorandom generators, embeddings, and data stream computation
Journal of the ACM (JACM)
Data streams: algorithms and applications
Foundations and Trends® in Theoretical Computer Science
Improved Approximation Algorithms for Large Matrices via Random Projections
FOCS '06 Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
Graph sparsification by effective resistances
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
Tighter bounds for random projections of manifolds
Proceedings of the twenty-fourth annual symposium on Computational geometry
Principal Component Analysis Based on L1-Norm Maximization
IEEE Transactions on Pattern Analysis and Machine Intelligence
Numerical linear algebra in the streaming model
Proceedings of the forty-first annual ACM symposium on Theory of computing
Sampling Algorithms and Coresets for ℓp Regression
SIAM Journal on Computing
The Fast Johnson-Lindenstrauss Transform and Approximate Nearest Neighbors
SIAM Journal on Computing
Linear Regression Analysis: Theory and Computing
A Randomized Algorithm for Principal Component Analysis
SIAM Journal on Matrix Analysis and Applications
Efficient Sketches for Earth-Mover Distance, with Applications
FOCS '09 Proceedings of the 2009 50th Annual IEEE Symposium on Foundations of Computer Science
Coresets and sketches for high dimensional subspace approximation problems
SODA '10 Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms
On the exact space complexity of sketching and streaming small norms
SODA '10 Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms
Faster least squares approximation
Numerische Mathematik
Practical global optimization for multiview geometry
ECCV'06 Proceedings of the 9th European conference on Computer Vision - Volume Part I
An almost optimal unrestricted fast Johnson-Lindenstrauss transform
Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
IEEE Transactions on Information Theory
Efficient point-to-subspace query in ℓ1 with application to robust face recognition
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part IV
Proceedings of the forty-fifth annual ACM symposium on Theory of computing
We show there is a distribution over linear mappings R: ℓ1^n → ℓ1^{O(d log d)}, such that with arbitrarily large constant probability, for any fixed d-dimensional subspace L, for all x ∈ L we have ‖x‖1 ≤ ‖Rx‖1 ≤ O(d log d)‖x‖1. This provides the first analogue of the ubiquitous subspace Johnson-Lindenstrauss embedding for the ℓ1-norm. Importantly, the target dimension and distortion are independent of the ambient dimension n. We give several applications of this result. First, we give a faster algorithm for computing well-conditioned bases. Our algorithm is simple, avoiding the linear programming machinery required by previous algorithms. We also give faster algorithms for least absolute deviation regression and ℓ1-norm best-fit hyperplane problems, as well as the first single-pass streaming algorithms with low space for these problems. These results are motivated by practical problems in image analysis, spam detection, and statistics, where the ℓ1-norm is used in studies where outliers may be safely and effectively ignored. This is because the ℓ1-norm is more robust to outliers than the ℓ2-norm.
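The embedding in the abstract, like the stable-distribution stream sketches cited above, is built from matrices of i.i.d. Cauchy (1-stable) random variables. The snippet below is a minimal illustrative sketch, not the paper's exact construction: the dimensions n, m, d and the 1/m scaling are assumptions chosen for demonstration, and it checks the distortion only for a single vector rather than proving the for-all-x-in-L guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, d = 1000, 50, 5  # ambient dim, sketch dim, subspace dim (illustrative)

# Sketch matrix of i.i.d. standard Cauchy (1-stable) entries, scaled by 1/m.
# By 1-stability, for any fixed x each coordinate (Rx)_i is Cauchy with
# scale ||x||_1 / m, so ||Rx||_1 tracks ||x||_1 up to a modest factor.
R = rng.standard_cauchy((m, n)) / m

# A random d-dimensional subspace L of R^n, and one vector x in it.
B = rng.standard_normal((n, d))
x = B @ rng.standard_normal(d)

ratio = np.linalg.norm(R @ x, 1) / np.linalg.norm(x, 1)
print(f"||Rx||_1 / ||x||_1 = {ratio:.3f}")
```

The paper's contribution is the stronger, simultaneous guarantee ‖x‖1 ≤ ‖Rx‖1 ≤ O(d log d)‖x‖1 for every x in the subspace L at once, which is what makes the sketch usable for ℓ1 regression and best-fit hyperplane problems; the snippet only spot-checks one vector.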