Infinite-σ Limits For Tikhonov Regularization

  • Authors:
  • Ross A. Lippert, Ryan M. Rifkin

  • Venue:
  • The Journal of Machine Learning Research
  • Year:
  • 2006

Abstract

We consider the problem of Tikhonov regularization with a general convex loss function: this formalism includes support vector machines and regularized least squares. For a family of kernels that includes the Gaussian, parameterized by a "bandwidth" parameter σ, we characterize the limiting solution as σ → ∞. In particular, we show that if we set the regularization parameter λ = λ̃σ^(−2p), the regularization term of the Tikhonov problem tends to an indicator function on polynomials of degree ⌊p⌋ (with residual regularization in the case where p ∈ ℤ). The proof rests on two key ideas: epi-convergence, a notion of functional convergence under which limits of minimizers converge to minimizers of limits, and a value-based formulation of learning, where we work with regularization on the function output values (y) as opposed to the function expansion coefficients in the RKHS. Our result generalizes and unifies previous results in this area.
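
The LaTeX sketch below states the setup the abstract describes. The symbols for the loss V, the training data (x_i, y_i), and the RKHS norm are standard conventions assumed here for illustration, not notation quoted from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\DeclareMathOperator*{\argmin}{arg\,min}
\begin{document}
% Tikhonov regularization with a general convex loss V over the RKHS
% induced by a Gaussian kernel of bandwidth \sigma (a sketch of the
% setup described in the abstract; V, (x_i, y_i), and the norm
% notation are assumed standard, not taken from the paper).
\[
  f_{\sigma} \;=\; \argmin_{f \in \mathcal{H}_{K_\sigma}}
    \;\sum_{i=1}^{n} V\bigl(f(x_i),\, y_i\bigr)
    \;+\; \lambda \,\lVert f \rVert_{K_\sigma}^{2},
  \qquad
  K_\sigma(x, x') \;=\;
    \exp\!\Bigl(-\tfrac{\lVert x - x' \rVert^{2}}{2\sigma^{2}}\Bigr).
\]
% The abstract couples the regularization parameter to the bandwidth:
\[
  \lambda \;=\; \tilde{\lambda}\,\sigma^{-2p},
\]
% so that as \sigma \to \infty the regularization term tends to an
% indicator function on polynomials of degree \lfloor p \rfloor
% (with residual regularization when p \in \mathbb{Z}).
\end{document}
```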