An Algorithm for the Principal Component Analysis of Large Data Sets

  • Authors:
  • Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert

  • Affiliations:
  • nathan.halko@colorado.edu and martinss@colorado.edu; -; yoelsh@post.tau.ac.il; tygert@aya.yale.edu

  • Venue:
  • SIAM Journal on Scientific Computing
  • Year:
  • 2011

Abstract

Recently popularized randomized methods for principal component analysis (PCA) efficiently and reliably produce nearly optimal accuracy—even on parallel processors—unlike the classical (deterministic) alternatives. We adapt one of these randomized methods for use with data sets that are too large to be stored in random-access memory (RAM). (The traditional terminology is that our procedure works efficiently out-of-core.) We illustrate the performance of the algorithm via several numerical examples. For example, we report on the PCA of a data set stored on disk that is so large that less than a hundredth of it can fit in our computer's RAM.
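
For orientation only, the sketch below illustrates the general flavor of an out-of-core randomized PCA of the kind the abstract refers to: the data matrix is read from disk in row blocks, and only small sketch matrices are held in RAM. This is not the authors' implementation; the `row_blocks` interface, the block layout, the omission of mean-centering, and the omission of power iterations are all assumptions made purely for illustration.

```python
import numpy as np

def randomized_pca_from_blocks(row_blocks, n_cols, k, oversample=10, seed=0):
    """Rough sketch of an out-of-core randomized PCA.

    row_blocks : callable returning a fresh iterator over row blocks of the
                 data matrix A; each block is a NumPy array of shape
                 (b_i, n_cols), and the blocks stacked top to bottom form A.
                 (Hypothetical interface for this illustration.)
    n_cols     : number of columns of A.
    k          : number of principal components requested.
    """
    rng = np.random.default_rng(seed)
    l = k + oversample                        # sketch width, small vs. n_cols

    # Pass 1 over the data: apply A to a random test matrix, block by block.
    omega = rng.standard_normal((n_cols, l))
    y_blocks = [block @ omega for block in row_blocks()]

    # Orthonormalize the sampled column space.  The m-by-l matrix fits in RAM
    # because l is tiny compared with n_cols (an assumption of this sketch).
    q, _ = np.linalg.qr(np.vstack(y_blocks))

    # Pass 2 over the data: form the small matrix B = Q^T A block by block.
    b = np.zeros((l, n_cols))
    row = 0
    for block in row_blocks():
        b += q[row:row + block.shape[0]].T @ block
        row += block.shape[0]

    # SVD of the small matrix yields approximate principal components.
    u_small, s, vt = np.linalg.svd(b, full_matrices=False)
    components = vt[:k]                       # approximate right singular vectors
    scores = (q @ u_small[:, :k]) * s[:k]     # approximate scores U * Sigma
    return components, s[:k], scores
```

Note that this sketch makes only two passes over the data and skips the extra power-iteration passes (repeated multiplications by the matrix and its transpose) that the paper uses to sharpen accuracy when the singular values decay slowly; it also leaves out the mean-centering step that distinguishes PCA from a plain low-rank SVD.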