This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. To cope with the growing support inherent in online kernel methods, the proposed algorithm uses a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea of the proposed methodology is to discard the center with the smallest influence on the whole system when a new sample is included in the dictionary. The significance measure can be updated recursively at each step, making it suitable for online operation. Furthermore, the proposed methodology requires no a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy.
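The fixed-budget scheme described above can be sketched in a few lines. Note that the abstract does not specify the significance measure itself; the sketch below substitutes a simple hypothetical stand-in (the magnitude of each center's coefficient) for the paper's recursively updated weighted-contribution measure, and the kernel width, step size, and budget are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    """Gaussian kernel between input x and stored center c."""
    return np.exp(-np.sum((np.asarray(x) - c) ** 2) / (2.0 * sigma ** 2))

class FixedBudgetKLMS:
    """Kernel LMS with a fixed memory budget (QKLMS-FB-style sketch).

    When the dictionary exceeds the budget, the center with the
    smallest significance is discarded. The paper's significance
    measure is not reproduced here; as a stand-in, each center is
    scored by the magnitude of its coefficient.
    """

    def __init__(self, budget=20, step=0.5, sigma=0.3):
        self.budget = budget      # maximum number of stored centers
        self.step = step          # LMS step size (learning rate)
        self.sigma = sigma        # kernel bandwidth
        self.centers = []         # dictionary of stored inputs
        self.alphas = []          # corresponding coefficients

    def predict(self, x):
        """Kernel expansion over the current dictionary."""
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        """One online step: predict, store the new center, prune if over budget."""
        e = y - self.predict(x)                 # prediction error
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.step * e)       # standard KLMS coefficient
        if len(self.centers) > self.budget:
            # discard the least "significant" center (stand-in criterion)
            i = int(np.argmin(np.abs(self.alphas)))
            self.centers.pop(i)
            self.alphas.pop(i)
        return e
```

A typical usage pattern is to stream samples through `update` and observe that the dictionary size stays capped at the budget while the prediction error shrinks, e.g. when learning a smooth target such as sin(3x) on [-1, 1].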