Matrix multiplication via arithmetic progressions
Journal of Symbolic Computation - Special issue on computational algebraic complexity
Information and Computation
Randomized algorithms
Approximating matrix multiplication for pattern recognition tasks
Journal of Algorithms
The space complexity of approximating the frequency moments
Journal of Computer and System Sciences
Polynomial Hash Functions Are Reliable (Extended Abstract)
ICALP '92 Proceedings of the 19th International Colloquium on Automata, Languages and Programming
Finding frequent items in data streams
Theoretical Computer Science - Special issue on automata, languages and programming
Fast sparse matrix multiplication
ACM Transactions on Algorithms (TALG)
Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
SIAM Journal on Computing
Improved Approximation Algorithms for Large Matrices via Random Projections
FOCS '06 Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
Declaring independence via the sketching of sketches
Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms
Faster join-projects and sparse matrix multiplications
Proceedings of the 12th International Conference on Database Theory
A note on compressed sensing and the complexity of matrix multiplication
Information Processing Letters
Numerical linear algebra in the streaming model
Proceedings of the forty-first annual ACM symposium on Theory of computing
Approximate sparse recovery: optimizing time and measurements
Proceedings of the forty-second ACM symposium on Theory of computing
Better size estimation for sparse matrix products
APPROX/RANDOM'10 Proceedings of the 13th International Conference on Approximation, and 14th International Conference on Randomization, and Combinatorial Optimization: Algorithms and Techniques
The power of simple tabulation hashing
Proceedings of the forty-third annual ACM symposium on Theory of computing
Compressed matrix multiplication
Proceedings of the 3rd Innovations in Theoretical Computer Science Conference
Multiplying matrices faster than Coppersmith-Winograd
STOC '12 Proceedings of the forty-fourth annual ACM symposium on Theory of computing
IEEE Transactions on Information Theory - Part 1
Faster Algorithms for Rectangular Matrix Multiplication
FOCS '12 Proceedings of the 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science
Finding Correlations in Subquadratic Time, with Applications to Learning Parities and Juntas
FOCS '12 Proceedings of the 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science
We present a simple algorithm that approximates the product of two n-by-n real matrices A and B. Let ‖AB‖_F denote the Frobenius norm of AB, and let b be a parameter determining the time/accuracy trade-off. Given 2-wise independent hash functions h_1, h_2 : [n] → [b] and sign functions s_1, s_2 : [n] → {−1, +1}, the algorithm works by first "compressing" the matrix product into the polynomial

p(x) = \sum_{k=1}^{n} \left( \sum_{i=1}^{n} A_{ik}\, s_1(i)\, x^{h_1(i)} \right) \left( \sum_{j=1}^{n} B_{kj}\, s_2(j)\, x^{h_2(j)} \right).

Using the fast Fourier transform for polynomial multiplication, we can compute coefficients c_0, …, c_{b−1} such that

\sum_{i} c_i x^i = (p(x) \bmod x^b) + (p(x) \operatorname{div} x^b)

in time Õ(n² + nb). An unbiased estimator of (AB)_{ij} with variance at most ‖AB‖_F² / b can then be computed as

C_{ij} = s_1(i)\, s_2(j)\, c_{(h_1(i) + h_2(j)) \bmod b}.

Our approach also leads to an algorithm for computing AB exactly, with high probability, in time Õ(N + nb) in the case where A and B have at most N nonzero entries and AB has at most b nonzero entries.
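The compression and estimation steps above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: for simplicity it draws fully random hash and sign functions from a NumPy generator rather than the 2-wise independent families the analysis assumes, and it uses a length-2b FFT since p(x) has degree at most 2b − 2 before the mod/div folding.

```python
import numpy as np

def compressed_product_sketch(A, B, b, rng):
    """Compress the product AB into b counters c_0, ..., c_{b-1}.

    Hypothetical helper for illustration; uses fully random h1, h2, s1, s2
    instead of 2-wise independent hash functions.
    """
    n = A.shape[0]
    h1 = rng.integers(0, b, size=n)       # h1, h2 : [n] -> [b]
    h2 = rng.integers(0, b, size=n)
    s1 = rng.choice([-1.0, 1.0], size=n)  # s1, s2 : [n] -> {-1, +1}
    s2 = rng.choice([-1.0, 1.0], size=n)
    m = 2 * b                             # p(x) has degree <= 2b - 2
    acc = np.zeros(m, dtype=complex)
    for k in range(n):
        pa = np.zeros(m)
        pb = np.zeros(m)
        np.add.at(pa, h1, s1 * A[:, k])   # sum_i A_ik s1(i) x^{h1(i)}
        np.add.at(pb, h2, s2 * B[k, :])   # sum_j B_kj s2(j) x^{h2(j)}
        acc += np.fft.fft(pa) * np.fft.fft(pb)   # pointwise product = convolution
    p = np.fft.ifft(acc).real             # coefficients of p(x)
    c = p[:b] + p[b:]                     # (p mod x^b) + (p div x^b)
    return h1, h2, s1, s2, c

def estimate_entry(i, j, h1, h2, s1, s2, c, b):
    """Unbiased estimator of (AB)_ij with variance at most ||AB||_F^2 / b."""
    return s1[i] * s2[j] * c[(h1[i] + h2[j]) % b]
```

A useful sanity check on the folding step: summing all counters evaluates p(1), which must equal \sum_k (\sum_i A_{ik} s_1(i)) (\sum_j B_{kj} s_2(j)) regardless of the hash values.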