The low-rank matrix approximation problem asks for a rank-k matrix A_k that is as close as possible to the best rank-k SVD approximation of an m × n matrix A. Previous approaches approximate A by adaptively sampling some of its columns (or rows) non-uniformly, in the hope that this subset carries enough information about A; the resulting sub-matrix is then used in the approximation process. These approaches, however, are often computationally intensive because of the complexity of the adaptive sampling. In this paper, we propose a fast and efficient algorithm that first preprocesses A to spread out the information (energy) of every column (or row), and then selects some of its columns (or rows) uniformly at random. Finally, a rank-k approximation is generated from the row space of the selected set. The preprocessing step randomizes the signs of the entries of A uniformly and transforms all columns of A by an orthonormal matrix F that admits a fast implementation (e.g., Hadamard, FFT, DCT). Our main contributions are summarized as follows.

1) We show that by selecting d rows of the preprocessed matrix uniformly at random, with d = O((k/η) max{log k, log(1/β)}), we guarantee the relative Frobenius-norm error bound (1 + η)‖A − A_k‖_F with probability at least 1 − 5β.

2) With the same d, we establish the spectral-norm error bound (2 + √(2m/d))‖A − A_k‖_2 with probability at least 1 − 2β.

3) The algorithm requires two passes over the data and runs in time O(mn log d + (m + n)d²), which, to the best of our knowledge, makes it the fastest algorithm when the matrix A is dense.

4) As a bonus, applying this framework to the well-known least-squares approximation problem min ‖Ax − b‖ with A ∈ R^{m × r}, we show that by randomly choosing d = O((γ r log m)/η) rows, the residual of the approximate solution is within a factor (1 + η) of the optimal one with extremely high probability, namely at least 1 − 6m^{−γ}.
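To make the pipeline concrete, here is a minimal sketch in Python/NumPy of the low-rank approximation step, assuming the orthonormal DCT as the fast transform F (one of the options named above). The function name, the parameter handling, and the final projection onto the top-k right singular vectors of the sketch are our own illustrative choices under those assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy.fft import dct

def randomized_low_rank(A, k, d, seed=None):
    """Sketch of the randomized rank-k approximation: randomize signs,
    apply a fast orthonormal transform, sample d rows uniformly, and
    build the approximation from the row space of the sample.
    Requires k <= min(d, A.shape[1])."""
    rng = np.random.default_rng(seed)
    m, n = A.shape

    # Step 1: randomize the signs of the rows of A (multiplication by a
    # random diagonal +/-1 matrix) to spread out the energy of A.
    signs = rng.choice([-1.0, 1.0], size=m)
    B = signs[:, None] * A

    # Step 2: apply a fast orthonormal transform F to every column
    # (here the orthonormal DCT-II; Hadamard or FFT would also do).
    B = dct(B, axis=0, norm="ortho")

    # Step 3: uniformly sample d rows of the preprocessed matrix.
    rows = rng.choice(m, size=d, replace=False)
    S = B[rows, :]                      # d x n sketch

    # Step 4: generate a rank-k approximation from the row space of S
    # by projecting A onto the sketch's top-k right singular vectors.
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    Vk = Vt[:k, :]                      # k x n
    return (A @ Vk.T) @ Vk              # rank-k approximation of A
```

The dominant costs match the claimed running time: the transform touches all mn entries, while the SVD and projections involve only the d sampled rows, for O(mn log d + (m + n)d²) overall on a dense A.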
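The least-squares bonus result (contribution 4) admits the same sketch-and-solve treatment. The following sketch, again assuming the DCT as F and with a caller-chosen sample size d, applies the identical preprocessing to both A and b (since the sign flip and F are orthogonal, ‖FDAx − FDb‖ = ‖Ax − b‖) and then solves the small d × r subproblem; it is an illustration, not the paper's exact procedure.

```python
def randomized_least_squares(A, b, d, seed=None):
    """Sketch of the least-squares application: preprocess A and b with
    the same random signs and fast transform, sample d rows uniformly,
    and solve the resulting small d x r least-squares problem."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape

    # Randomize signs and apply the orthonormal transform to A and b
    # jointly, so the full problem's residual norm is preserved.
    signs = rng.choice([-1.0, 1.0], size=m)
    TA = dct(signs[:, None] * A, axis=0, norm="ortho")
    Tb = dct(signs * b, axis=0, norm="ortho")

    # Uniformly sample d of the m preprocessed rows and solve the
    # reduced problem in O(d r^2) time.
    rows = rng.choice(m, size=d, replace=False)
    x, *_ = np.linalg.lstsq(TA[rows, :], Tb[rows], rcond=None)
    return x
```

With d on the order of (γ r log m)/η, the abstract's guarantee says the returned x has residual within a factor (1 + η) of optimal with probability at least 1 − 6m^{−γ}.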