Learning with non-positive kernels

  • Authors:
  • Cheng Soon Ong; Xavier Mary; Stéphane Canu; Alexander J. Smola

  • Affiliations:
  • RSISE, Australian National University, ACT, Australia; ENSAE-CREST-LS, avenue Pierre Larousse, Malakoff, France; Laboratoire PSI FRE CNRS 2645 - INSA de Rouen, Mont-Saint-Aignan Cedex, France; Australian National University, ACT, Australia

  • Venue:
  • ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning
  • Year:
  • 2004

Abstract

In this paper we show that many kernel methods can be adapted to deal with indefinite kernels, that is, kernels which are not positive semidefinite. Such kernels do not satisfy Mercer's condition, and they induce associated functional spaces called Reproducing Kernel Kreĭn Spaces (RKKS), a generalization of Reproducing Kernel Hilbert Spaces (RKHS). Machine learning in an RKKS shares many "nice" properties of learning in an RKHS, such as orthogonality and projection. However, since the kernels are indefinite, we can no longer minimize the loss; instead, we stabilize it. We show a general representer theorem for constrained stabilization and prove generalization bounds by computing the Rademacher averages of the kernel class. We list several examples of indefinite kernels and investigate regularization methods for spline interpolation. Some preliminary experiments with indefinite kernels for spline smoothing are reported for truncated spectral factorization, Landweber-Fridman iterations, and MR-II.
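
To make the indefiniteness concrete, below is a minimal Python sketch (not from the paper itself) illustrating two of the regularization schemes the abstract names. It uses the tanh (sigmoid) kernel, a standard example of an indefinite kernel whose Gram matrix typically has negative eigenvalues, and then fits noisy 1-D data in the spirit of spline smoothing via truncated spectral factorization and Landweber-Fridman iterations. The kernel parameters, truncation threshold, step size, and iteration count are all illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical 1-D inputs and noisy targets for a smoothing problem.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1.0, 1.0, 40))
y = np.sin(3.0 * X) + 0.1 * rng.normal(size=40)

# The tanh (sigmoid) kernel is a classic indefinite kernel: its Gram
# matrix generally has negative eigenvalues, so Mercer's condition fails.
K = np.tanh(1.0 * np.outer(X, X) + 0.5)

eigvals, eigvecs = np.linalg.eigh(K)
print("smallest eigenvalue:", eigvals.min())  # typically negative

# Truncated spectral factorization: keep eigen-components whose
# magnitude exceeds a cutoff and invert only those -- a regularized
# pseudo-inverse that tolerates negative eigenvalues.
tau = 1e-2  # illustrative truncation threshold
keep = np.abs(eigvals) > tau
alpha = eigvecs[:, keep] @ ((eigvecs[:, keep].T @ y) / eigvals[keep])
fit_spectral = K @ alpha  # smoothed values at the training points

# Landweber-Fridman iteration: gradient descent on the normal
# equations K^T K beta = K^T y; the iteration count plays the role
# of the regularization parameter (early stopping).
mu = 1.0 / np.abs(eigvals).max() ** 2  # step size ensuring convergence
beta = np.zeros_like(y)
for _ in range(200):  # illustrative iteration budget
    beta = beta + mu * K.T @ (y - K @ beta)
fit_landweber = K @ beta
```

Both schemes regularize the same ill-posed linear system: the spectral cut-off acts on eigenvalue magnitude directly, while the Landweber-Fridman iteration works on the positive semidefinite normal equations and so remains well behaved even though K itself is indefinite.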