The Kernel Least-Mean-Square Algorithm

  • Authors:
  • Weifeng Liu; P. P. Pokharel; J. C. Principe

  • Affiliations:
  • University of Florida, Gainesville

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2008

Abstract

The combination of the famed kernel trick and the least-mean-square (LMS) algorithm provides an interesting sample-by-sample update for an adaptive filter in reproducing kernel Hilbert spaces (RKHS), which this paper names the kernel least-mean-square (KLMS) algorithm. Contrary to the accepted view in kernel methods, this paper shows that in the finite training data case the KLMS algorithm is well posed in RKHS without the addition of an extra regularization term to penalize solution norms, as was suggested by Kivinen et al. [Kivinen, Smola, and Williamson, "Online Learning With Kernels," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, Aug. 2004] and Smale and Yao [Smale and Yao, "Online Learning Algorithms," Foundations of Computational Mathematics, vol. 6, no. 2, pp. 145-176, 2006]. This result is the main contribution of the paper and enhances the present understanding of the LMS algorithm from a machine learning perspective. The effect of the KLMS step size is also studied from the viewpoint of regularization. Two experiments support the conclusion that, with finite data, the KLMS algorithm can be readily used in high-dimensional spaces, and particularly in RKHS, to derive nonlinear, stable algorithms with performance comparable to batch, regularized solutions.
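
The abstract's description of the update is concrete enough to sketch. Below is a minimal KLMS illustration in Python, offered as a sketch rather than the authors' code: the Gaussian kernel and the particular step size (eta) and kernel width (sigma) values are assumed choices for the toy example, not values from the paper. Each incoming sample becomes a kernel center whose coefficient is the step size times the instantaneous prediction error, which is the LMS update transported to the RKHS.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # Gaussian (RBF) kernel; an assumed, common choice of Mercer kernel.
        diff = np.asarray(x) - np.asarray(y)
        return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

    def klms_train(X, d, eta=0.2, sigma=1.0):
        # Sample-by-sample KLMS: each training input becomes a kernel
        # center with coefficient = step size * instantaneous error.
        centers, coeffs = [], []
        for x, target in zip(X, d):
            # Output of the current filter at the new sample (zero initially).
            y = sum(a * gaussian_kernel(c, x, sigma)
                    for c, a in zip(centers, coeffs))
            e = target - y
            centers.append(x)
            coeffs.append(eta * e)
        return centers, coeffs

    def klms_predict(x, centers, coeffs, sigma=1.0):
        # Learned function: f(x) = sum_i coeffs[i] * k(centers[i], x).
        return sum(a * gaussian_kernel(c, x, sigma)
                   for c, a in zip(centers, coeffs))

    # Toy usage: learn a nonlinear map from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    d = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    centers, coeffs = klms_train(X, d, eta=0.5, sigma=0.5)
    print(klms_predict(np.array([1.0]), centers, coeffs, sigma=0.5))

Note that no explicit norm penalty appears in the update; consistent with the paper's main claim, stability with finite data comes from the step size alone, which plays the regularizing role.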