In this paper, we study the mean square convergence of the kernel least mean square (KLMS) algorithm. The fundamental energy conservation relation is established in the feature space. Starting from this relation, we carry out a mean square convergence analysis and obtain several important theoretical results: an upper bound on the step size that guarantees mean square convergence, the theoretical steady-state excess mean square error (EMSE), an optimal step size for the fastest convergence, and an optimal kernel size for the fastest initial convergence. Monte Carlo simulation results agree well with the theoretical analysis.
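To make the object of the analysis concrete, the following is a minimal sketch of the standard KLMS recursion with a Gaussian kernel. The function names, the step size, the kernel size, and the toy data are illustrative assumptions, not details taken from the paper; in the analysis above, the "step size" and "kernel size" referred to are exactly the two parameters `step_size` and `kernel_size` below.

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    # Gaussian kernel: kappa(x, c) = exp(-||x - c||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def klms(inputs, desired, step_size=0.5, kernel_size=0.5):
    """Kernel least mean square (sketch): at each step, predict with the
    current kernel expansion, compute the a priori error, and add the new
    input as a center with coefficient step_size * error."""
    centers, coeffs, errors = [], [], []
    for u, d in zip(inputs, desired):
        # Prediction is a kernel expansion over all past inputs
        y = sum(a * gaussian_kernel(u, c, kernel_size)
                for a, c in zip(coeffs, centers))
        e = d - y                      # a priori error e(n) = d(n) - y(n)
        centers.append(u)              # every input becomes a new center
        coeffs.append(step_size * e)   # coefficient eta * e(n)
        errors.append(e)
    return centers, coeffs, errors

# Toy usage (assumed data): learn a static nonlinearity online.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=(200, 1))
d = np.sin(3.0 * u[:, 0]) + 0.01 * rng.standard_normal(200)
_, _, errs = klms(u, d, step_size=0.5, kernel_size=0.5)
errs = np.asarray(errs)
```

The squared a priori errors `errs ** 2` are the sample-level quantity whose steady-state average, minus the noise variance, gives the EMSE analyzed in the paper; too large a `step_size` makes this sequence diverge, which is what the step-size upper bound rules out.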