In this paper, we address the column-based low-rank matrix approximation problem with a novel parallel approach based on a divide-and-combine strategy. We first perform column selection on submatrices of the original data matrix in parallel, and then combine the selected columns into the final output. Our approach enjoys a theoretical relative-error upper bound. In addition, it partitions the data deterministically and makes no assumptions about matrix coherence. Compared with traditional methods, our approach scales to large matrices. Finally, experiments on both simulated and real-world data show that our approach is both efficient and effective.
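The divide-and-combine idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it uses column-pivoted QR as a stand-in for the per-block column selection subroutine, and all function and parameter names (`select_columns`, `divide_and_combine_cx`, `num_blocks`) are hypothetical.

```python
import numpy as np
from scipy.linalg import qr

def select_columns(A, k):
    """Select k column indices of A via column-pivoted QR (a stand-in
    for whatever column selection subroutine is run on each block)."""
    _, _, piv = qr(A, mode="economic", pivoting=True)
    return piv[:k]

def divide_and_combine_cx(A, k, num_blocks):
    """Sketch of divide-and-combine column-based low-rank approximation.

    Deterministically partition the columns of A into contiguous blocks,
    select k columns from each block independently (this loop is the
    embarrassingly parallel "divide" step), then select k columns from
    the pooled candidates (the "combine" step).
    """
    n = A.shape[1]
    blocks = np.array_split(np.arange(n), num_blocks)
    # Divide: per-block selection; each iteration could run on a worker.
    candidates = np.concatenate(
        [blk[select_columns(A[:, blk], min(k, len(blk)))] for blk in blocks]
    )
    # Combine: final selection over the pooled candidate columns.
    final = candidates[select_columns(A[:, candidates], k)]
    C = A[:, final]            # selected columns of A
    X = np.linalg.pinv(C) @ A  # least-squares coefficients: A ~ C @ X
    return final, C, X

# Example: a rank-5 matrix is recovered (up to numerics) from 5 columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 200))
idx, C, X = divide_and_combine_cx(A, k=5, num_blocks=4)
err = np.linalg.norm(A - C @ X) / np.linalg.norm(A)
```

Because the per-block selections are independent, the divide step maps naturally onto a data-parallel framework, with each worker holding one column block of the matrix.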