An information theoretic sparse kernel algorithm for online learning
Expert Systems with Applications: An International Journal
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term into the estimation of the current output, which makes past information reusable during the update in the form of a recurrent gradient term. To ensure that reusing this recurrent gradient actually accelerates convergence, a novel hybrid recurrent training scheme is proposed that switches learning of the recurrent information on or off according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate that guarantees system weight convergence at each training iteration. The learning rate is set to zero whenever training would violate the derived convergence conditions, which makes the update process sparse. Theoretical analyses of the weight convergence are presented, and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy.
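The abstract's three ingredients (a linear recurrent term, error-gated hybrid recurrent training, and a data-dependent learning rate that is zeroed to enforce sparse updates) can be illustrated with a minimal kernel-LMS-style sketch. This is not the paper's exact algorithm: the Gaussian kernel, the normalization used for the learning rate, the stability bound `eta_max`, and the switching threshold `switch_thresh` are all hypothetical placeholders standing in for the derived convergence conditions.

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    # Gaussian (RBF) kernel between input x and stored center c
    return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

class RecurrentSparseKLMS:
    """Illustrative sketch only: kernel expansion for the feedforward
    part, a scalar linear recurrent weight b on the previous output,
    hybrid training that learns b only when the error is large, and a
    sparse update that skips learning when a placeholder learning-rate
    bound is violated."""

    def __init__(self, sigma=1.0, eta=0.5, switch_thresh=0.1, eta_max=1.0):
        self.sigma = sigma
        self.eta0 = eta                  # base step size
        self.switch_thresh = switch_thresh  # hybrid on/off error threshold
        self.eta_max = eta_max           # stand-in for a convergence bound
        self.centers = []                # stored inputs (kernel centers)
        self.alphas = []                 # kernel expansion coefficients
        self.b = 0.0                     # linear recurrent weight
        self.y_prev = 0.0                # previous output

    def predict(self, x):
        y = sum(a * gaussian_kernel(x, c, self.sigma)
                for a, c in zip(self.alphas, self.centers))
        return y + self.b * self.y_prev  # linear recurrent term

    def update(self, x, d):
        y = self.predict(x)
        e = d - y
        # data-dependent learning rate (placeholder normalization);
        # zeroed (sparse update) if it exceeds the assumed bound
        eta = self.eta0 / (1.0 + abs(self.y_prev))
        if eta > self.eta_max:
            eta = 0.0
        if eta > 0.0:
            self.centers.append(np.asarray(x, dtype=float))
            self.alphas.append(eta * e)
            # hybrid recurrent training: update the recurrent weight
            # only when the current error magnitude is large enough
            if abs(e) > self.switch_thresh:
                self.b += eta * e * self.y_prev
        self.y_prev = y
        return e
```

As a usage sketch, training the filter to do one-step-ahead prediction of a sinusoid shows the intended behavior: early prediction errors shrink as centers accumulate and the recurrent weight is only adapted on large-error samples.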