Local Dimensionality Reduction for Non-Parametric Regression

  • Authors:
  • Heiko Hoffmann, Stefan Schaal, Sethu Vijayakumar

  • Affiliations:
  • Heiko Hoffmann: IPAB, School of Informatics, University of Edinburgh, Edinburgh EH9 3JZ, UK, and Biomedical Engineering, University of Southern California, Los Angeles, CA 90089-2905, USA
  • Stefan Schaal: Computer Science and Neuroscience, University of Southern California, Los Angeles, CA 90089-2905, USA
  • Sethu Vijayakumar: IPAB, School of Informatics, University of Edinburgh, Edinburgh EH9 3JZ, UK

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2009

Abstract

Locally weighted regression is a computationally efficient technique for non-linear regression. For high-dimensional data, however, it becomes numerically brittle and computationally too expensive when many local models must be maintained simultaneously. Combining local linear dimensionality reduction with locally weighted regression is therefore a promising solution. In this context, we review linear dimensionality-reduction methods, compare their performance in non-parametric locally linear regression, and discuss their suitability for incremental learning. The methods considered fall into three groups: (1) reducing the dimensionality of the input data only, (2) modeling the joint input-output data distribution, and (3) optimizing the correlation between projection directions and the output data. Group 1 contains principal component regression (PCR); group 2 contains principal component analysis (PCA) in the joint input and output space, factor analysis, and probabilistic PCA; and group 3 contains reduced-rank regression (RRR) and partial least squares (PLS) regression. Among the tested methods, only those in group 3 achieved robust performance even with a non-optimal number of components (factors or projection directions). In contrast, the methods in groups 1 and 2 failed when given too few components, since they rely on a correct estimate of the true intrinsic dimensionality. Within group 3, PLS is the only method for which a computationally efficient incremental implementation exists. Thus, PLS appears ideally suited as a building block for a locally weighted regressor in which projection directions are added incrementally on the fly.
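
Since the abstract singles out PLS as the group-3 method with an efficient incremental implementation, a minimal batch sketch of PLS1 (the standard NIPALS variant for a single output) may help make the idea concrete. This is an illustrative NumPy implementation under textbook assumptions, not the paper's incremental algorithm; the function names pls1_fit and pls1_predict are hypothetical.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Batch PLS1 (NIPALS) for a single output. Returns the data means,
    projection directions W, input loadings P, and per-component
    regression coefficients b. Illustrative sketch, not the paper's
    incremental variant."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    X = X - x_mean                   # center inputs
    y = y - y_mean                   # center output
    d = X.shape[1]
    W = np.zeros((d, n_components))  # projection directions
    P = np.zeros((d, n_components))  # input loadings
    b = np.zeros(n_components)       # coefficients on the latent scores
    for k in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)       # direction of maximal covariance with y
        t = X @ w                    # latent score along that direction
        tt = t @ t
        P[:, k] = X.T @ t / tt
        b[k] = (y @ t) / tt
        X -= np.outer(t, P[:, k])    # deflate inputs
        y -= b[k] * t                # deflate output residual
        W[:, k] = w
    return x_mean, y_mean, W, P, b

def pls1_predict(x, x_mean, y_mean, W, P, b):
    """Predict the output for one input vector x."""
    x = x - x_mean
    y_hat = y_mean
    for k in range(len(b)):
        t = x @ W[:, k]              # project onto the k-th direction
        y_hat += b[k] * t
        x = x - t * P[:, k]          # deflate before the next direction
    return y_hat

# Usage on synthetic data where y depends on a 2-D subspace of a 10-D input:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
x_mean, y_mean, W, P, b = pls1_fit(X, y, n_components=2)
print(pls1_predict(X[0], x_mean, y_mean, W, P, b), y[0])
```

Because each pass deflates the data before computing the next direction, components can be added one at a time without refitting the earlier ones, which is what makes the incrementally-growing local regressor described in the abstract feasible.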