Kernel basis pursuit

  • Authors:
  • Vincent Guigue, Alain Rakotomamonjy, Stéphane Canu

  • Affiliations:
  • Lab. Perception, Systèmes, Information, CNRS, FRE 2645, St Étienne du Rouvray (all authors)

  • Venue:
  • ECML'05 Proceedings of the 16th European conference on Machine Learning
  • Year:
  • 2005

Abstract

Estimating a non-uniformly sampled function from a set of learning points is a classical regression problem. Kernel methods have been widely used in this context, but every problem raises two major tasks: choosing the kernel and setting the compromise between data fit and regularization. This article presents a new method for estimating a function from noisy learning points in the framework of a Reproducing Kernel Hilbert Space (RKHS). We introduce the Kernel Basis Pursuit algorithm, which builds an ℓ1-regularized, multiple-kernel estimator. The general idea is to decompose the function to be learned over a sparse, optimal set of spanning functions. Our implementation relies on the Least Absolute Shrinkage and Selection Operator (LASSO) formulation and on the Least Angle Regression (LARS) solver. Computing the full regularization path with the LARS allows us to propose new adaptive criteria for finding an optimal compromise between fit and regularization. Finally, we aim to provide a fast, parameter-free method for estimating non-uniformly sampled functions.
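
The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a dictionary is formed by stacking Gram matrices of several Gaussian kernels evaluated at the learning points, and the LARS solver returns the full LASSO regularization path over this multiple-kernel dictionary. The synthetic data, the kernel bandwidths, and the point chosen on the path are illustrative assumptions; the paper's adaptive criteria for that choice are not reproduced here.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import lars_path

# Noisy, non-uniformly sampled 1-D regression problem (illustrative data).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))[:, None]
y = np.sinc(6 * (x.ravel() - 0.5)) + 0.05 * rng.standard_normal(80)

# Multiple-kernel dictionary: stack the Gram matrices of several Gaussian
# kernels evaluated at the learning points (bandwidths are arbitrary guesses).
gammas = [1.0, 10.0, 100.0]
D = np.hstack([rbf_kernel(x, x, gamma=g) for g in gammas])

# LARS computes the full LASSO regularization path over the dictionary,
# giving a sparse coefficient vector for every value of the l1 penalty.
alphas, active, coefs = lars_path(D, y, method='lasso')

# Pick one point on the path (a fixed index here, where the paper instead
# proposes adaptive criteria) and rebuild the sparse estimator.
beta = coefs[:, min(20, coefs.shape[1] - 1)]
y_hat = D @ beta
print(f"{np.count_nonzero(beta)} active spanning functions, "
      f"RMSE = {np.sqrt(np.mean((y_hat - y) ** 2)):.3f}")
```

Because LARS delivers the entire path at roughly the cost of a single LASSO fit, model-selection criteria can be evaluated at every breakpoint without re-solving the problem, which is what makes the parameter-free selection advocated in the paper practical.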