Given an m x n matrix A, we are interested in applying it to a real vector x in R^n in less than the straightforward O(mn) time. For an exact, deterministic computation, at the very least every entry of A must be accessed, requiring O(mn) operations and matching the running time of naively applying A to x. However, we claim that if the matrix contains only a constant number of distinct values, then reading the matrix once, in O(mn) steps, suffices to preprocess it so that every subsequent application to a vector requires only O(mn/log(max{m,n})) operations. Algorithms for matrix-vector multiplication over finite fields that save a log factor have been known for many years. Our contribution is unique in its simplicity and in the fact that it also applies to real-valued vectors. Using our algorithm improves on recent results for dimensionality reduction: it gives the first random projection scheme with asymptotically optimal running time. The mailman algorithm is also shown to be useful (faster than the naive method) even for small matrices.
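To make the log-factor saving concrete, below is a minimal Python sketch of the mailman idea for the special case of a 0/1 matrix. It relies on the observation that a k x n binary matrix with k = log2(n) has at most n distinct columns, so A factors as A = U_k P, where U_k is the k x 2^k matrix whose j-th column is the binary expansion of j, and P merely routes each entry of x to its column's index. Px costs O(n), and U_k z can be applied in O(2^k) = O(n) by recursive halving; taller matrices are handled by stacking row blocks of height log2(n). All function names here are mine, not the paper's, and for brevity the preprocessing (column indexing) is done inside the multiply rather than once up front.

```python
import numpy as np

def column_indices(block):
    """Map each column of a k x n binary block to its index in U_k.

    Row 0 is treated as the least significant bit. This is the O(kn)
    one-time preprocessing step (the 'sorting of the mail').
    """
    k, _ = block.shape
    powers = 1 << np.arange(k)          # 1, 2, 4, ... bit weights per row
    return powers @ block               # integer index of each column

def apply_U(k, z):
    """Compute U_k @ z in O(2^k), where column j of U_k is the bits of j.

    Columns whose top bit (k-1) is set form the second half of z, so
    row k-1 of the product is sum(z2); the remaining rows reduce to
    U_{k-1} applied to z1 + z2.
    """
    if k == 0:
        return np.zeros(0)
    half = len(z) // 2
    z1, z2 = z[:half], z[half:]
    rest = apply_U(k - 1, z1 + z2)      # rows 0 .. k-2
    return np.concatenate([rest, [z2.sum()]])  # row k-1

def mailman_matvec(A, x):
    """Multiply a binary m x n matrix A by a real vector x.

    Rows are processed in blocks of height k = floor(log2 n); each block
    costs O(n), for O(mn / log n) total per product.
    """
    m, n = A.shape
    k = max(1, int(np.floor(np.log2(n))))
    out = np.empty(m)
    for r in range(0, m, k):
        block = A[r:r + k]
        kb = block.shape[0]             # last block may be shorter
        idx = column_indices(block)
        z = np.zeros(1 << kb)
        np.add.at(z, idx, x)            # z = P x: bucket x by column index
        out[r:r + kb] = apply_U(kb, z)  # U_kb (P x)
    return out
```

The recursion in `apply_U` is where the speedup lives: it touches each of the 2^k buckets a constant number of times instead of multiplying against every matrix entry. Matrices over any constant-size alphabet reduce to this binary case with a constant-factor overhead.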