Improved Approximation Algorithms for Large Matrices via Random Projections

  • Authors:
  • Tamas Sarlos

  • Affiliations:
  • Eötvös University and Computer and Automation Research Institute, Hungary

  • Venue:
  • FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
  • Year:
  • 2006


Abstract

Several recent results show significant reductions in the running time of matrix multiplication, singular value decomposition, and linear $\ell_2$ regression, all based on data-dependent random sampling. Our key idea is that low-dimensional embeddings can be used to eliminate this data dependence and yield more versatile, linear-time, pass-efficient matrix computations. Our main contributions are as follows.

  • Independently of the recent results of Har-Peled and of Deshpande and Vempala, one of the first, and to the best of our knowledge the most efficient, relative-error $(1 + \epsilon)\|A - A_k\|_F$ approximation algorithms for the singular value decomposition of an $m \times n$ matrix $A$ with $M$ non-zero entries. It requires 2 passes over the data and runs in time $O\left(\left(M\left(\frac{k}{\epsilon} + k\log k\right) + (n+m)\left(\frac{k}{\epsilon} + k\log k\right)^2\right)\log\frac{1}{\delta}\right)$.
  • The first $o(nd^2)$-time $(1 + \epsilon)$ relative-error approximation algorithm for $n \times d$ linear $\ell_2$ regression.
  • A matrix multiplication and norm approximation algorithm that applies readily to implicitly given matrices and can be used as a black-box probability-boosting tool.
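
For intuition, here is a minimal NumPy sketch of the random-projection idea behind the two-pass low-rank approximation: one pass multiplies $A$ by a random test matrix, a second pass projects $A$ onto the resulting subspace. The Gaussian test matrix, the oversampling parameter, and the function name are illustrative assumptions, not the paper's exact construction or parameter settings.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Two-pass rank-k approximation of A via a random projection.

    Gaussian sketching and the oversampling amount are illustrative
    choices here, not the paper's exact construction.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    r = min(n, k + oversample)           # sketch size: k plus some slack
    Omega = rng.standard_normal((n, r))  # random test matrix
    Y = A @ Omega                        # pass 1 over A
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for range(Y)
    B = Q.T @ A                          # pass 2: project A onto the basis
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k, :]    # rank-k factors

# Usage: Frobenius error of the sketched rank-k approximation.
A = np.random.default_rng(0).standard_normal((500, 200))
U, s, Vt = randomized_low_rank(A, k=10)
err = np.linalg.norm(A - (U * s) @ Vt, "fro")
```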
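Similarly, the regression result can be illustrated by solving a randomly sketched least-squares problem in place of the full one. A dense Gaussian sketch is used below for clarity; the $o(nd^2)$ running time in the paper relies on a fast (FFT-based) projection, which this sketch does not implement, and the sketch size $4d$ is an illustrative assumption.

```python
import numpy as np

def sketched_lstsq(X, y, sketch_rows=None, rng=None):
    """Approximate least squares via a random projection of the rows.

    Solves min_w ||S X w - S y||_2 for a random sketch S instead of
    the full n x d problem. The dense Gaussian S and the default
    sketch size are illustrative, not the paper's fast transform.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    r = sketch_rows or 4 * d                       # illustrative sketch size
    S = rng.standard_normal((r, n)) / np.sqrt(r)   # row sketch
    w, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
    return w

# Usage: compare the sketched solution against the exact solver.
rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(10_000)
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
w_approx = sketched_lstsq(X, y)
```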