The Johnson-Lindenstrauss Lemma and the sphericity of some graphs
Journal of Combinatorial Theory, Series B
Approximate nearest neighbors: towards removing the curse of dimensionality
STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing
An elementary proof of a theorem of Johnson and Lindenstrauss
Random Structures & Algorithms
Database-friendly random projections: Johnson-Lindenstrauss with binary coins
Journal of Computer and System Sciences - Special issue on PODS 2001
Algorithmic Applications of Low-Distortion Geometric Embeddings
FOCS '01 Proceedings of the 42nd IEEE symposium on Foundations of Computer Science
The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities
Improved Approximation Algorithms for Large Matrices via Random Projections
FOCS '06 Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
Numerische Mathematik
On variants of the Johnson–Lindenstrauss lemma
Random Structures & Algorithms
Large-Scale Parallel Collaborative Filtering for the Netflix Prize
AAIM '08 Proceedings of the 4th international conference on Algorithmic Aspects in Information and Management
Near-Optimal Sparse Recovery in the L1 Norm
FOCS '08 Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science
Perturbed identity matrices have high rank: Proof and applications
Combinatorics, Probability and Computing
Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit
Foundations of Computational Mathematics
Feature hashing for large scale multitask learning
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
The Fast Johnson-Lindenstrauss Transform and Approximate Nearest Neighbors
SIAM Journal on Computing
Fast Dimension Reduction Using Rademacher Series on Dual BCH Codes
Discrete & Computational Geometry
Sequential sparse matching pursuit
Allerton'09 Proceedings of the 47th annual Allerton conference on Communication, control, and computing
A sparse Johnson-Lindenstrauss transform
Proceedings of the forty-second ACM symposium on Theory of computing
Lower bounds for sparse recovery
SODA '10 Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms
Sparse graph codes for compression, sensing, and secrecy
Sparser Johnson-Lindenstrauss transforms
Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms
An almost optimal unrestricted fast Johnson-Lindenstrauss transform
Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms
A fast random sampling algorithm for sparsifying matrices
APPROX'06/RANDOM'06 Proceedings of the 9th international conference on Approximation Algorithms for Combinatorial Optimization Problems, and 10th international conference on Randomization and Computation
Decoding by linear programming
IEEE Transactions on Information Theory
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
IEEE Transactions on Information Theory
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
IEEE Transactions on Information Theory
Hard Thresholding Pursuit: An Algorithm for Compressive Sensing
SIAM Journal on Numerical Analysis
Low rank approximation and regression in input sparsity time
Proceedings of the forty-fifth annual ACM symposium on Theory of computing
Sparser Johnson-Lindenstrauss Transforms
Journal of the ACM (JACM)
We give near-tight lower bounds for the sparsity required in several dimensionality-reducing linear maps. First, consider the Johnson-Lindenstrauss (JL) lemma, which states that for any set of n vectors in R^d there is an A ∈ R^{m×d} with m = O(ε^{-2} log n) such that mapping by A preserves the pairwise Euclidean distances up to a 1 ± ε factor. We show there exists a set of n vectors such that any such A with at most s non-zero entries per column must have s = Ω(ε^{-1} log n / log(1/ε)) if m = O(ε^{-2} log n). This improves the previous lower bound of s = Ω(min{ε^{-2}, ε^{-1}√(log_m d)}) by [Dasgupta-Kumar-Sarlos, STOC 2010], which only held against the stronger property of distributional JL, and only against a certain restricted class of distributions. Meanwhile, our lower bound is against the JL lemma itself, with no restrictions. Our lower bound matches the sparse JL upper bound of [Kane-Nelson, SODA 2012] up to an O(log(1/ε)) factor.

Next, we show that any m × n matrix with the k-restricted isometry property (RIP) with constant distortion must have Ω(k log(n/k)) non-zeroes per column if m = O(k log(n/k)), the optimal number of rows for RIP, and k < n. This improves the previous lower bound of Ω(min{k, n/m}) by [Chandar, 2010] and shows that for most k it is impossible to have a sparse RIP matrix with an optimal number of rows. Both lower bounds above also offer a tradeoff between sparsity and the number of rows.

Lastly, we show that any oblivious distribution over subspace embedding matrices with 1 non-zero per column that preserves distances in a d-dimensional subspace up to a constant factor must have at least Ω(d^2) rows. This matches an upper bound in [Nelson-Nguyên, arXiv abs/1211.1002] and shows the impossibility of obtaining the best of both constructions in that work, namely 1 non-zero per column and d · polylog(d) rows.
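The first result concerns JL matrices with at most s non-zeros per column. The sketch below is a minimal illustration of this setting: each column gets exactly s random ±1/√s entries. The parameters m and s and the per-column placement are hypothetical choices for demonstration, a simplified variant rather than the exact construction analyzed in [Kane-Nelson, SODA 2012].

```python
import numpy as np

def sparse_jl(d, m, s, rng):
    # Illustrative sparse JL matrix: each column has exactly s non-zero
    # entries equal to +/- 1/sqrt(s), placed in uniformly random rows.
    # (Simplified variant, not the paper's exact block construction.)
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
n, d = 100, 2000
m, s = 300, 30          # hypothetical parameters for demonstration
X = rng.standard_normal((n, d))
A = sparse_jl(d, m, s, rng)
Y = X @ A.T             # embed all n points into R^m

# empirical distortion of pairwise distances on a sample of pairs
pairs = [(i, j) for i in range(0, n, 10) for j in range(i + 1, n, 10)]
ratios = np.array([np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
                   for i, j in pairs])
print(ratios.min(), ratios.max())
```

Since each column has only s non-zeros, computing Ax costs O(s · nnz-per-coordinate) rather than O(m) per input coordinate, which is the motivation for sparsity that the lower bound constrains.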
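The last result concerns oblivious subspace embeddings with a single non-zero per column, the regime of CountSketch-style constructions. A rough empirical check of that regime, with hypothetical parameters (m chosen on the order of d^2, matching the Ω(d^2) lower bound):

```python
import numpy as np

def one_nonzero_per_column(n, m, rng):
    # CountSketch-style embedding: each column holds a single +/-1 entry
    # in a uniformly random row, so S @ x costs nnz(x) operations.
    h = rng.integers(0, m, size=n)          # random row per column
    sigma = rng.choice([-1.0, 1.0], size=n)  # random sign per column
    S = np.zeros((m, n))
    S[h, np.arange(n)] = sigma
    return S

rng = np.random.default_rng(1)
n, d = 2000, 5
m = 20 * d * d    # on the order of d^2 rows (hypothetical constant)
# orthonormal basis of a random d-dimensional subspace of R^n
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
S = one_nonzero_per_column(n, m, rng)

# S preserves the subspace up to constant distortion iff all singular
# values of S @ U are close to 1
svals = np.linalg.svd(S @ U, compute_uv=False)
print(svals.min(), svals.max())
```

The abstract's point is that with only 1 non-zero per column, roughly d^2 rows as above are unavoidable; getting down to d · polylog(d) rows requires more non-zeros per column.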