In this paper we develop algorithms for approximating matrix multiplication with respect to the spectral norm. Let A ∈ ℝ^{n×m} and B ∈ ℝ^{n×p} be two matrices and ε > 0. We approximate the product AᵀB using two sketches Ã ∈ ℝ^{t×m} and B̃ ∈ ℝ^{t×p}, where t ≪ n, such that

‖AᵀB − ÃᵀB̃‖₂ ≤ ε ‖A‖₂ ‖B‖₂

with high probability. We analyze two different sampling procedures for constructing Ã and B̃: the first draws rows of A and B by i.i.d. non-uniform sampling, and the second takes random linear combinations of their rows. We prove bounds on t that depend only on the intrinsic dimensionality of A and B, namely their rank and their stable rank. To achieve bounds that depend on the rank when taking random linear combinations, we employ standard tools from high-dimensional geometry, such as concentration-of-measure arguments combined with elaborate ε-net constructions. For bounds that depend on the smaller parameter, the stable rank, this machinery by itself appears too weak; we show, however, that combined with a simple truncation argument it does yield such bounds. To obtain analogous bounds for row sampling, we develop a novel matrix-valued Chernoff inequality, which we call the low-rank matrix-valued Chernoff bound. Thanks to this inequality, we are able to give bounds that depend only on the stable rank of the input matrices. We highlight the usefulness of our approximate matrix multiplication bounds with two applications. First, we give an approximation algorithm for the ℓ2-regression problem that returns an approximate solution by randomly projecting the initial problem down to a number of dimensions linear in the rank of the constraint matrix. Second, we give improved approximation algorithms for the low-rank matrix approximation problem with respect to the spectral norm.
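As a concrete illustration of the first sampling procedure, the following NumPy sketch samples t rows of A and B i.i.d. from a non-uniform distribution and rescales them so that ÃᵀB̃ is an unbiased estimator of AᵀB. The function name and the specific distribution (probabilities proportional to products of row norms, one standard choice in this literature) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sample_sketch(A, B, t, rng=None):
    """Sketch A (n x m) and B (n x p) by i.i.d. non-uniform row sampling,
    so that As.T @ Bs approximates A.T @ B.

    Illustrative choice: row i is picked with probability proportional to
    ||A_i|| * ||B_i||, and each sampled row is rescaled by 1/sqrt(t * p_i),
    which makes As.T @ Bs an unbiased estimator of A.T @ B.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    norms = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(n, size=t, p=p)          # i.i.d. sample of row indices
    scale = 1.0 / np.sqrt(t * p[idx])         # rescaling for unbiasedness
    return A[idx] * scale[:, None], B[idx] * scale[:, None]
```

For matrices whose stable rank is much smaller than n, a sketch size t far below n already gives a small spectral-norm error relative to ‖A‖₂‖B‖₂.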
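The ℓ2-regression application can be sketched in a few lines: instead of solving min_x ‖Ax − b‖₂ on all n rows, project the problem down with a random sketch matrix S and solve the small problem min_x ‖SAx − Sb‖₂. This minimal version uses a generic Gaussian projection as a stand-in for the paper's random projection; the function name and the choice of sketch size t are assumptions for illustration.

```python
import numpy as np

def sketched_lstsq(A, b, t, rng=None):
    """Approximate least squares via random projection: solve the
    t-dimensional sketched problem min_x ||S A x - S b||_2, where S has
    i.i.d. N(0, 1/t) entries (a generic sketch; the paper's guarantee
    takes the projected dimension linear in rank(A))."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    S = rng.standard_normal((t, n)) / np.sqrt(t)  # Gaussian sketch matrix
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

The sketched solve costs O(t·m²) after the projection, versus O(n·m²) for the full problem, while the residual ‖Ax̂ − b‖₂ stays within a (1 + ε) factor of optimal with high probability for suitable t.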