Approximating Gaussian Processes with $\mathcal{H}^2$-Matrices

  • Authors:
  • Steffen Börm; Jochen Garcke

  • Affiliations:
  • Max Planck Institute for Mathematics in the Sciences, Inselstraße 22–26, 04103 Leipzig, Germany; Technische Universität Berlin, Institut für Mathematik, MA 3-3, Straße des 17. Juni 136, 10623 Berlin, Germany

  • Venue:
  • ECML '07: Proceedings of the 18th European Conference on Machine Learning
  • Year:
  • 2007


Abstract

To compute the exact solution of Gaussian process regression, one needs $\mathcal{O}(N^3)$ operations for direct methods and $\mathcal{O}(N^2)$ for iterative methods, since it involves a densely populated kernel matrix of size $N \times N$, where $N$ denotes the number of data points. This makes large-scale learning problems intractable by standard techniques.

We propose an alternative approach: the kernel matrix is replaced by a data-sparse approximation, called an $\mathcal{H}^2$-matrix. This matrix can be represented by only $\mathcal{O}(Nm)$ units of storage, where $m$ is a parameter controlling the accuracy of the approximation, while the computation of the $\mathcal{H}^2$-matrix scales with $\mathcal{O}(Nm \log N)$.

Practical experiments demonstrate that our scheme leads to significant reductions in storage requirements and computing times for large data sets in lower-dimensional spaces.
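For orientation, here is a minimal NumPy sketch (not from the paper) of exact Gaussian process regression with a dense kernel matrix. The squared-exponential kernel, length scale, and noise level are illustrative assumptions; the Cholesky factorization of the $N \times N$ system is the $\mathcal{O}(N^3)$ step the abstract refers to.

```python
import numpy as np

# Toy 1D regression data (illustrative only).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, 500)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(500)

ell, noise = 0.2, 0.1
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * ell ** 2))  # dense N x N

# Direct solve via Cholesky: O(N^3) time, O(N^2) storage.
L = np.linalg.cholesky(K + noise ** 2 * np.eye(len(X)))
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Predictive mean at new points: K(X*, X) @ alpha.
Xs = np.linspace(0.0, 1.0, 5)
Ks = np.exp(-(Xs[:, None] - X[None, :]) ** 2 / (2 * ell ** 2))
print(Ks @ alpha)
```

An $\mathcal{H}^2$-matrix avoids storing such a matrix densely by replacing admissible (well-separated) blocks with rank-$m$ factorizations; one common construction obtains them by polynomial interpolation of the kernel function. The following 1D sketch, whose cluster intervals, kernel, and order $m$ are chosen purely for illustration, shows such a degenerate expansion $K \approx U S V^{\mathsf T}$ for a single block and measures its accuracy. A full $\mathcal{H}^2$-matrix additionally organizes all blocks via a cluster tree and uses nested cluster bases, which this sketch omits.

```python
import numpy as np

def kernel(x, y, ell=2.0):
    # Squared-exponential kernel (an assumed choice for illustration).
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2.0 * ell ** 2))

def cheb_nodes(a, b, m):
    # m Chebyshev nodes transformed to the interval [a, b].
    k = np.arange(m)
    t = np.cos((2.0 * k + 1.0) * np.pi / (2.0 * m))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def lagrange_basis(nodes, x):
    # L[i, k] = k-th Lagrange basis polynomial evaluated at x[i].
    L = np.ones((len(x), len(nodes)))
    for k, xk in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if j != k:
                L[:, k] *= (x - xj) / (xk - xj)
    return L

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))  # target cluster in [0, 1]
y = np.sort(rng.uniform(3.0, 4.0, 200))  # source cluster in [3, 4]

m = 6                                    # interpolation order = block rank
xi = cheb_nodes(0.0, 1.0, m)             # nodes in the target box
eta = cheb_nodes(3.0, 4.0, m)            # nodes in the source box

U = lagrange_basis(xi, x)                # (200, m) cluster basis
V = lagrange_basis(eta, y)               # (200, m) cluster basis
S = kernel(xi, eta)                      # (m, m) coupling matrix

K_block = kernel(x, y)                   # dense block: 200 * 200 entries
K_approx = U @ S @ V.T                   # factored form: O(N m) storage

err = np.linalg.norm(K_block - K_approx) / np.linalg.norm(K_block)
print(f"relative error of the rank-{m} approximation: {err:.1e}")
```

Increasing the interpolation order $m$ drives the block error down rapidly for smooth kernels, which is how $m$ acts as the accuracy parameter mentioned in the abstract.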