Deriving the kernel from training data

  • Authors:
  • Stefano Merler, Giuseppe Jurman, Cesare Furlanello

  • Affiliations:
  • FBK-irst, Povo, Trento, Italy (all authors)

  • Venue:
  • MCS'07 Proceedings of the 7th international conference on Multiple classifier systems
  • Year:
  • 2007

Abstract

In this paper we propose a strategy for constructing data-driven kernels that are automatically determined by the training examples. Their associated Reproducing Kernel Hilbert Spaces arise from finite sets of linearly independent functions, which can be interpreted as weak classifiers or regressors learned from the training data. When working in the Tikhonov regularization framework, the only free parameter to be optimized is the regularizer, which controls the trade-off between empirical error and smoothness of the solution. A generalization error bound based on Rademacher complexity is provided, offering a principled way to control overfitting.
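
To make the construction concrete, the sketch below is a minimal illustration, not the authors' implementation: it assumes depth-1 decision trees fitted on bootstrap samples of toy data as the finite set of weak regressors h_1, ..., h_m, defines the data-driven kernel K(x, x') = sum_j h_j(x) h_j(x'), and then fits a Tikhonov-regularized (kernel ridge) solution in which the regularization parameter lambda is the only quantity left to tune. All names, data, and the choice of weak learner are illustrative assumptions.

```python
# Sketch (illustrative, not the paper's code): a kernel derived from weak
# learners fitted on the training data, followed by Tikhonov regularization.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression data (hypothetical example).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Step 1: learn a finite set of weak regressors (depth-1 trees here)
# on bootstrap samples of the training data.
m = 30
weak_learners = []
for _ in range(m):
    idx = rng.integers(0, len(X), size=len(X))
    stump = DecisionTreeRegressor(max_depth=1).fit(X[idx], y[idx])
    weak_learners.append(stump)

def feature_map(A):
    """Map inputs to the span of the weak learners: Phi[i, j] = h_j(a_i)."""
    return np.column_stack([h.predict(A) for h in weak_learners])

def kernel(A, B):
    """Data-driven kernel K(a, b) = sum_j h_j(a) * h_j(b)."""
    return feature_map(A) @ feature_map(B).T

# Step 2: Tikhonov-regularized (kernel ridge) solution; lambda is the only
# free parameter, trading empirical error against smoothness.
lam = 1e-2
n = len(X)
K = kernel(X, X)
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_new):
    return kernel(X_new, X) @ alpha

print("training MSE:", np.mean((predict(X) - y) ** 2))
```

In this sketch the weak learners play the role of the finite set of functions spanning the RKHS; any other family of functions learned from the training data could be substituted without changing the Tikhonov step.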