Generalized Low Rank Approximations of Matrices

  • Authors:
  • Jieping Ye

  • Venue:
  • Machine Learning
  • Year:
  • 2005

Abstract

The problem of computing low rank approximations of matrices is considered. The novel aspect of our approach is that the low rank approximations are computed for a collection of matrices. We formulate this as an optimization problem that minimizes the reconstruction (approximation) error. To the best of our knowledge, the optimization problem proposed in this paper does not admit a closed-form solution. We thus derive an iterative algorithm, namely GLRAM, which stands for the Generalized Low Rank Approximations of Matrices. GLRAM reduces the reconstruction error sequentially, so the resulting approximation improves over successive iterations. Experimental results show that the algorithm converges rapidly. We have conducted extensive experiments on image data to evaluate the effectiveness of the proposed algorithm and to compare the computed low rank approximations with those obtained from traditional Singular Value Decomposition (SVD) based methods. The comparison is based on reconstruction error, misclassification error rate, and computation time. Results show that GLRAM is competitive with SVD for classification while having a much lower computation cost; however, GLRAM yields a larger reconstruction error than SVD. To further reduce the reconstruction error, we study the combination of GLRAM and SVD, namely GLRAM + SVD, in which SVD is preceded by GLRAM. Results show that, for the same number of reduced dimensions, GLRAM + SVD achieves a significant reduction of the reconstruction error compared to GLRAM, while keeping the computation cost low.
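GLRAM is commonly described as an alternating procedure: given matrices A_1, ..., A_n of size r x c, it seeks orthonormal L (r x l1) and R (c x l2) so that each A_i is approximated by L M_i R^T with M_i = L^T A_i R. With R fixed, L is taken from the top eigenvectors of the sum of A_i R R^T A_i^T; with L fixed, R is taken from the top eigenvectors of the sum of A_i^T L L^T A_i. The NumPy sketch below follows that description; the function name, the identity-based initialization, and the fixed iteration count are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def glram(As, l1, l2, n_iter=10):
    """Sketch of the alternating GLRAM iteration: find orthonormal L (r x l1)
    and R (c x l2) reducing sum_i ||A_i - L (L^T A_i R) R^T||_F^2."""
    r, c = As[0].shape
    # Initialize R with the first l2 columns of the identity (a simple choice).
    R = np.eye(c, l2)
    for _ in range(n_iter):
        # Fix R: L <- top-l1 eigenvectors of sum_i A_i R R^T A_i^T.
        M_L = sum(A @ R @ R.T @ A.T for A in As)
        w, V = np.linalg.eigh(M_L)          # eigenvalues in ascending order
        L = V[:, ::-1][:, :l1]              # columns for the l1 largest
        # Fix L: R <- top-l2 eigenvectors of sum_i A_i^T L L^T A_i.
        M_R = sum(A.T @ L @ L.T @ A for A in As)
        w, V = np.linalg.eigh(M_R)
        R = V[:, ::-1][:, :l2]
    Ms = [L.T @ A @ R for A in As]          # reduced representations
    return L, R, Ms

# Usage: 50 random 32 x 32 "images" reduced to 8 x 8 core matrices,
# with the total reconstruction error the iteration aims to shrink.
As = [np.random.rand(32, 32) for _ in range(50)]
L, R, Ms = glram(As, l1=8, l2=8)
err = sum(np.linalg.norm(A - L @ M @ R.T) ** 2 for A, M in zip(As, Ms))
```

In this reading, GLRAM + SVD would first compute the reduced matrices M_i as above and then apply a conventional SVD-based step to them, which is how the abstract's "SVD is preceded by GLRAM" can be understood.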