Adaptive filter theory (3rd ed.)
Prediction update algorithms for XCSF: RLS, Kalman filter, and gain adaptation
Proceedings of the 8th annual conference on Genetic and evolutionary computation
A learning algorithm for continually running fully recurrent neural networks
Neural Computation
Convergence analysis of adaptive filtering algorithms with singular data covariance matrix
IEEE Transactions on Signal Processing
New results on recurrent network training: unifying the algorithms and accelerating convergence
IEEE Transactions on Neural Networks
In Echo State Networks (ESNs) and, more generally, the Reservoir Computing paradigm (a recent approach to recurrent neural networks), the linear readout weights, i.e., the linear output weights, are the only weights actually learned during training. The standard approach is SVD-based pseudo-inverse linear regression. Here we compare it with two well-known on-line filters, Least Mean Squares (LMS) and Recursive Least Squares (RLS). As we shall illustrate, while LMS performance is not satisfactory, RLS is a good on-line alternative that may deserve further attention.
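The three readout-training schemes named in the abstract can be sketched in a few lines of NumPy. The snippet below is an illustrative sketch, not the paper's actual experimental setup: it uses random vectors in place of collected reservoir states, and the step size, forgetting factor, and initialisation constants are assumed values chosen only to make the toy problem converge. It fits the same linear readout by batch pseudo-inverse regression, by one pass of RLS, and by one pass of LMS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reservoir state" matrix: T time steps, N reservoir units.
# (In a real ESN these states would be collected by driving the
# reservoir with input; random states suffice to show the readout fit.)
T, N = 500, 20
X = rng.standard_normal((T, N))
w_true = rng.standard_normal(N)
y = X @ w_true + 0.01 * rng.standard_normal(T)  # noisy linear target

# 1) Batch readout: SVD-based pseudo-inverse linear regression.
w_pinv = np.linalg.pinv(X) @ y

# 2) On-line readout: Recursive Least Squares (RLS).
lam = 0.999                  # forgetting factor (assumed value)
P = np.eye(N) / 1e-2         # inverse-correlation estimate, P(0) = I/delta
w_rls = np.zeros(N)
for x_t, y_t in zip(X, y):
    k = P @ x_t / (lam + x_t @ P @ x_t)   # gain vector
    e = y_t - w_rls @ x_t                  # a priori error
    w_rls = w_rls + k * e
    P = (P - np.outer(k, x_t @ P)) / lam

# 3) On-line readout: Least Mean Squares (LMS).
mu = 0.01                    # step size (assumed value)
w_lms = np.zeros(N)
for x_t, y_t in zip(X, y):
    e = y_t - w_lms @ x_t
    w_lms = w_lms + mu * e * x_t

print("max |w_rls - w_pinv| =", np.max(np.abs(w_rls - w_pinv)))
print("max |w_lms - w_pinv| =", np.max(np.abs(w_lms - w_pinv)))
```

On this easy toy problem both on-line filters approach the batch solution after a single pass; the abstract's point is that on realistic reservoir data, where state covariance is far from well-conditioned, RLS tracks the pseudo-inverse solution much more closely than LMS does.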