Topics in matrix analysis
Information retrieval using a singular value decomposition model of latent semantic structure. SIGIR '88: Proceedings of the 11th annual international ACM SIGIR conference on Research and development in information retrieval
Matrix computations (3rd ed.)
Hadamard matrix analysis and synthesis: with applications to communications and signal/image processing
Latent semantic indexing: a probabilistic analysis. PODS '98: Proceedings of the seventeenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems
An improved algorithm for computing the singular value decomposition. ACM Transactions on Mathematical Software (TOMS)
Fast computation of low rank matrix approximations. STOC '01: Proceedings of the thirty-third annual ACM symposium on Theory of computing
Low-rank approximations with sparse factors I: basic algorithms and error analysis. SIAM Journal on Matrix Analysis and Applications
Fast Monte-Carlo algorithms for finding low-rank approximations. FOCS '98: Proceedings of the 39th Annual Symposium on Foundations of Computer Science
Experiments with random projections for machine learning. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining
Fast Monte Carlo algorithms for matrices I: approximating matrix multiplication. SIAM Journal on Computing
Fast Monte Carlo algorithms for matrices II: computing a low-rank approximation to a matrix. SIAM Journal on Computing
Improved approximation algorithms for large matrices via random projections. FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
Fast computation of low-rank matrix approximations. Journal of the ACM (JACM)
Subspace sampling and relative-error matrix approximation: column-row-based methods. ESA '06: Proceedings of the 14th Annual European Symposium on Algorithms
Less is more: sparse graph mining with compact matrix decomposition. Statistical Analysis and Data Mining
Fast dimension reduction using Rademacher series on dual BCH codes. Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms
A fast and efficient algorithm for low-rank approximation of a matrix. Proceedings of the forty-first annual ACM symposium on Theory of computing
A randomized algorithm for principal component analysis. SIAM Journal on Matrix Analysis and Applications
An experimental evaluation of a Monte-Carlo algorithm for singular value decomposition. PCI '01: Proceedings of the 8th Panhellenic conference on Informatics
Faster least squares approximation. Numerische Mathematik
Adaptive sampling and fast low-rank matrix approximation. APPROX'06/RANDOM'06: Proceedings of the 9th international conference on Approximation Algorithms for Combinatorial Optimization Problems, and 10th international conference on Randomization and Computation
A low-rank approximation to a matrix A is a matrix of significantly smaller rank than A that is close to A under some norm. Many practical applications involving large matrices rely on low-rank approximations: reducing the rank, or dimensionality, of the data reduces the cost of analyzing it. The singular value decomposition (SVD) yields the most widely used low-rank matrix approximation, but its computational cost has often made it impractical for massive datasets. Recent work has tried to address this problem, and several methods have been proposed that approximate the decomposition with better asymptotic running time. We present an empirical study of these techniques on a variety of dense and sparse datasets. We find that a sampling approach of Drineas, Kannan and Mahoney is often, but not always, the best-performing method: it produces highly accurate solutions much faster than classical SVD algorithms, particularly on large sparse datasets. Other modern methods, such as a recent algorithm of Rokhlin and Tygert, also offer savings over classical SVD algorithms, while the older sampling methods of Achlioptas and McSherry are shown to sometimes take longer than classical SVD.
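As an illustration of the two families of methods compared above (not the implementations evaluated in the study), the following NumPy sketch contrasts the classical truncated SVD with a simplified norm-squared column-sampling scheme in the spirit of the Drineas-Kannan-Mahoney approach; the function names, matrix sizes, and sample count are our own choices for the example.

```python
import numpy as np

def truncated_svd_approx(A, k):
    """Best rank-k approximation of A (Eckart-Young) via the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def sampled_low_rank(A, k, c, rng):
    """Sketch of norm-squared column sampling: draw c columns with
    probability proportional to their squared norms, rescale them, and
    project A onto the top-k left singular vectors of the sampled matrix."""
    p = (A ** 2).sum(axis=0) / (A ** 2).sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    C = A[:, idx] / np.sqrt(c * p[idx])
    H, _, _ = np.linalg.svd(C, full_matrices=False)
    Hk = H[:, :k]                      # approximate top-k left singular vectors
    return Hk @ (Hk.T @ A)

# A 100x50 matrix that is rank 5 up to tiny noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))
A += 1e-6 * rng.standard_normal((100, 50))

A5 = truncated_svd_approx(A, 5)
err = np.linalg.norm(A - A5) / np.linalg.norm(A)

A5s = sampled_low_rank(A, 5, 20, rng)
err2 = np.linalg.norm(A - A5s) / np.linalg.norm(A)
print(f"truncated SVD relative error: {err:.2e}")
print(f"column-sampling relative error: {err2:.2e}")
```

The sampling variant only ever decomposes the small sampled matrix C, which is the source of the speedups discussed above; on a matrix that is genuinely near rank k, both approximations recover A up to the noise level.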