Competitive recommendation systems
STOC '02 Proceedings of the thirty-fourth annual ACM symposium on Theory of computing
Pass efficient algorithms for approximating large matrices
SODA '03 Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms
Clustering Large Graphs via the Singular Value Decomposition
Machine Learning
Fast Monte-Carlo algorithms for finding low-rank approximations
Journal of the ACM (JACM)
Subgradient and sampling algorithms for l1 regression
SODA '05 Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning
The Journal of Machine Learning Research
Fast dimension reduction using Rademacher series on dual BCH codes
Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms
Sampling subproblems of heterogeneous Max-Cut problems and approximation algorithms
Random Structures & Algorithms
Graph sparsification by effective resistances
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
Exposure-Resilient Extractors and the Derandomization of Probabilistic Sublinear Time
Computational Complexity
Separating Sublinear Time Computations by Approximate Diameter
COCOA 2008 Proceedings of the 2nd international conference on Combinatorial Optimization and Applications
Dense Fast Random Projections and Lean Walsh Transforms
APPROX '08 / RANDOM '08 Proceedings of the 11th international workshop, APPROX 2008, and 12th international workshop, RANDOM 2008 on Approximation, Randomization and Combinatorial Optimization: Algorithms and Techniques
A platform for developing adaptable multicore applications
CASES '09 Proceedings of the 2009 international conference on Compilers, architecture, and synthesis for embedded systems
Foundations and Trends® in Theoretical Computer Science
An experimental evaluation of a Monte-Carlo algorithm for singular value decomposition
PCI'01 Proceedings of the 8th Panhellenic conference on Informatics
Spectral methods for matrices and tensors
Proceedings of the forty-second ACM symposium on Theory of computing
Collaborative scoring with dishonest participants
Proceedings of the twenty-second annual ACM symposium on Parallelism in algorithms and architectures
Stochastic algorithms in linear algebra: beyond the Markov chains and von Neumann-Ulam scheme
NMA'10 Proceedings of the 7th international conference on Numerical methods and applications
Multiplicative approximations of random walk transition probabilities
APPROX'11/RANDOM'11 Proceedings of the 14th international workshop and 15th international conference on Approximation, randomization, and combinatorial optimization: algorithms and techniques
SIAM Journal on Scientific Computing
Approximating a Gram matrix for improved kernel-based learning
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Sampling sub-problems of heterogeneous max-cut problems and approximation algorithms
STACS'05 Proceedings of the 22nd annual conference on Theoretical Aspects of Computer Science
Given an m × n matrix A and an n × p matrix B, we present two simple and intuitive algorithms to compute an approximation P to the product A · B, with provable bounds for the norm of the "error matrix" P − A · B. Both algorithms run in O(mp + mn + np) time. In both algorithms, we randomly pick s = O(1) columns of A to form an m × s matrix S and the corresponding rows of B to form an s × p matrix R. After scaling the columns of S and the rows of R, we multiply them together to obtain our approximation P. The choice of the probability distribution used for picking the columns of A, together with the scaling, is the crucial feature that enables us to give fairly elementary proofs of the error bounds. Our first algorithm can be implemented without storing the matrices A and B in random access memory, provided we can make two passes over the matrices (stored in external memory). The second algorithm has a smaller bound on the 2-norm of the error matrix, but requires storage of A and B in RAM. We also present a fast algorithm that "describes" P as a sum of rank-one matrices when B = A^T.
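The sampling scheme in the abstract can be sketched as follows in NumPy. This is a minimal illustration, not the paper's reference implementation: the function name is invented, and the specific choice of sampling probabilities proportional to the product of column/row norms is an assumption about the distribution the abstract alludes to.

```python
import numpy as np

def approx_matmul(A, B, s, seed=None):
    """Approximate A @ B by sampling s column/row pairs and rescaling.

    Columns of A (and the matching rows of B) are drawn with probability
    proportional to the product of their Euclidean norms (an assumed
    choice of distribution); the 1/sqrt(s * p) scaling makes the
    estimator unbiased, i.e. E[S @ R] = A @ B.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Sampling probabilities proportional to |A[:, k]| * |B[k, :]|.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=s, replace=True, p=p)
    scale = 1.0 / np.sqrt(s * p[idx])
    S = A[:, idx] * scale            # m x s: sampled, rescaled columns of A
    R = B[idx, :] * scale[:, None]   # s x p: sampled, rescaled rows of B
    return S @ R                     # the approximation P
```

Because each sampled rank-one term is reweighted by its sampling probability, the expected squared Frobenius error shrinks as the number of samples s grows, while the cost of forming P stays O(mp + mn + np).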