One of the main drawbacks of online learning for recurrent neural networks (RNNs) is the high computational cost of training. Much effort has been devoted to reducing the computational complexity of online learning algorithms, usually focusing on the real-time recurrent learning (RTRL) algorithm. Significant reductions in the complexity of RTRL have been achieved, but at the cost of degraded model performance. We take a different approach to complexity reduction in online learning of RNNs through a sequential Bayesian filtering framework and propose the ensemble Kalman filter (EnKF) for derivative-free parameter estimation. The EnKF provides an online training solution that, under certain assumptions, reduces the computational complexity by two orders of magnitude relative to the original RTRL algorithm without sacrificing the modeling potential of the network. Forecasting experiments on observed data from nonlinear systems show that the EnKF-trained RNN outperforms other RNN training algorithms in actual computation time and also yields models that produce better forecasts.
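To illustrate the idea of derivative-free, filtering-based training described above, the following is a minimal sketch (not the authors' implementation) of ensemble Kalman filter training for a small recurrent network: the network weights are treated as the state of the filter, the RNN forward pass plays the role of the observation operator, and the ensemble statistics replace the gradient computations used by RTRL. The toy Elman network, the ensemble size, and noise variances are illustrative assumptions, not values from the paper.

```python
# Hypothetical EnKF-based online training of a toy single-output Elman RNN.
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(w, h, x, n_h):
    """One step of a toy Elman RNN with all weights packed into the vector w."""
    n_x = x.shape[0]
    i = 0
    Wxh = w[i:i + n_h * n_x].reshape(n_h, n_x); i += n_h * n_x
    Whh = w[i:i + n_h * n_h].reshape(n_h, n_h); i += n_h * n_h
    Why = w[i:i + n_h].reshape(1, n_h);         i += n_h
    b   = w[i]
    h_new = np.tanh(Wxh @ x + Whh @ h)
    y = float(Why @ h_new + b)
    return h_new, y

def enkf_train(xs, ys, n_h=8, n_ens=50, obs_var=1e-2, q_var=1e-5):
    """Online, derivative-free weight estimation: the weights are the EnKF state."""
    n_x = xs.shape[1]
    n_w = n_h * n_x + n_h * n_h + n_h + 1
    W = 0.1 * rng.standard_normal((n_ens, n_w))   # ensemble of weight vectors
    H = np.zeros((n_ens, n_h))                    # hidden state of each member
    for x, y_obs in zip(xs, ys):
        # Propagate: small process noise keeps the ensemble spread from collapsing.
        W += np.sqrt(q_var) * rng.standard_normal(W.shape)
        Y = np.empty(n_ens)
        for k in range(n_ens):
            H[k], Y[k] = rnn_step(W[k], H[k], x, n_h)
        # Sample covariances from ensemble perturbations (no derivatives needed).
        dW = W - W.mean(axis=0)
        dY = Y - Y.mean()
        P_wy = dW.T @ dY / (n_ens - 1)            # weight-output cross-covariance
        P_yy = dY @ dY / (n_ens - 1) + obs_var    # scalar innovation variance
        K = P_wy / P_yy                           # Kalman gain for the weights
        # Update each member against a perturbed observation of the target.
        y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal(n_ens)
        W += np.outer(y_pert - Y, K)
    return W.mean(axis=0)

# Toy usage: one-step-ahead forecasting of a noisy sine wave.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
xs, ys = series[:-1, None], series[1:]
w_hat = enkf_train(xs, ys)
```

In this sketch the per-step cost is dominated by the ensemble of forward passes and the sample covariance products, rather than by the sensitivity recursions of RTRL, which is where the complexity reduction claimed in the abstract would come from under the stated assumptions.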