Kernel Component Analysis Using an Epsilon-Insensitive Robust Loss Function

  • Authors:
  • C. Alzate; J. Suykens

  • Affiliations:
  • Dept. of Electr. Eng., Katholieke Univ. Leuven, Leuven, Belgium

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

Kernel principal component analysis (PCA) is a technique for feature extraction in a high-dimensional feature space that is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) in which the underlying loss function is made explicit. For classical kernel PCA, the underlying loss function is L2. In this generalized form, one can also plug in other loss functions. In the context of robust statistics, it is known that the L2 loss function is not robust because its influence function is not bounded; outliers can therefore skew the solution away from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations, using the kernel PCA solution as a starting point. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improved robustness together with a sparse representation.
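
To make the eigendecomposition view concrete, below is a minimal sketch of classical kernel PCA: center the kernel matrix in feature space and take the eigenvectors with the largest eigenvalues as principal components. The function names (rbf_kernel, kernel_pca), the RBF kernel choice, and the sigma parameter are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian RBF kernel matrix (illustrative kernel choice and bandwidth)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def center_kernel(K):
    # Center the kernel matrix, i.e., center the data in feature space
    n = K.shape[0]
    J = np.ones((n, n)) / n
    return K - J @ K - K @ J + J @ K @ J

def kernel_pca(K, n_components=2):
    # Classical kernel PCA: eigendecomposition of the centered kernel matrix;
    # eigenvectors with the largest eigenvalues are the principal components
    Kc = center_kernel(K)
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]
```

The second algorithm in the abstract alternates reweighting with eigenvalue problems. The sketch below is only a schematic reading of that idea under stated assumptions: it uses the IRLS-style weight of an L1 epsilon-insensitive loss (zero inside the tube, which yields sparse coefficients; bounded outside, which limits outlier influence) and a symmetrically weighted kernel eigenproblem as a simplified stand-in for the paper's generalized eigenvalue formulation.

```python
def eps_insensitive_weights(e, eps):
    # Reweighting derived from an L1 epsilon-insensitive loss (assumed form):
    # weight 0 inside the tube -> sparseness; weight 1/|e| outside -> bounded influence
    w = np.zeros_like(e)
    outside = np.abs(e) > eps
    w[outside] = 1.0 / np.abs(e[outside])
    return w

def robust_kernel_component(K, eps=0.1, n_iter=20):
    # Simplified iterative weighting loop, starting from the kernel PCA solution
    Kc = center_kernel(K)
    _, vecs = kernel_pca(K, n_components=1)
    alpha = vecs[:, 0]
    for _ in range(n_iter):
        e = Kc @ alpha                          # per-point projection errors
        w = eps_insensitive_weights(e, eps)
        Dh = np.diag(np.sqrt(w))                # square root of the weight matrix
        _, v = np.linalg.eigh(Dh @ Kc @ Dh)     # weighted eigenproblem
        alpha_new = Dh @ v[:, -1]               # zero weight -> zero coefficient (sparse)
        nrm = np.linalg.norm(alpha_new)
        if nrm == 0.0:
            break                               # all points fell inside the tube
        alpha = alpha_new / nrm
    return alpha

# Usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
alpha = robust_kernel_component(rbf_kernel(X), eps=0.05)
print("nonzero expansion coefficients:", np.count_nonzero(alpha))
```

Increasing eps enlarges the insensitive tube, so more coefficients become exactly zero and the expansion gets sparser, at the cost of discarding more of the data's variance; this trade-off is the mechanism the abstract attributes to the epsilon-insensitive loss.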