Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform traditional RNNs on sequences that involve not only short-term but also long-term dependencies. The decoupled extended Kalman filter (DEKF) learning algorithm works well in online settings and significantly reduces the number of training steps compared with standard gradient-descent algorithms. Previous work on LSTM, however, has always used some form of gradient descent and has not focused on truly online situations. Here we combine LSTM with the DEKF and show that this hybrid improves on the original learning algorithm when applied to online processing.
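To make the DEKF idea concrete, below is a minimal NumPy sketch of a single DEKF weight update, not the paper's actual implementation. The function name dekf_step, the scalar noise levels r_scale and q_scale, and the particular weight-group partitioning are illustrative assumptions. The sketch assumes the Jacobians H_i = dy/dw_i of the network outputs with respect to each weight group have already been computed (e.g., by backpropagating through the LSTM), which is where the gradient machinery of the original algorithm would plug in.

```python
import numpy as np

def dekf_step(weights, covariances, jacobians, error,
              r_scale=1.0, q_scale=1e-4):
    """One decoupled extended Kalman filter (DEKF) update.

    weights     : list of (n_i,) parameter vectors, one per weight group
    covariances : list of (n_i, n_i) error covariance matrices P_i
    jacobians   : list of (n_out, n_i) matrices H_i = dy/dw_i
    error       : (n_out,) vector of target minus network output
    r_scale     : measurement-noise level R (scalar * identity here)
    q_scale     : artificial process noise keeping P_i well conditioned
    """
    n_out = error.shape[0]
    # Global scaling matrix A = (R + sum_i H_i P_i H_i^T)^{-1}.
    # It is shared by all groups and is what couples the otherwise
    # decoupled per-group updates.
    A = r_scale * np.eye(n_out)
    for H, P in zip(jacobians, covariances):
        A += H @ P @ H.T
    A = np.linalg.inv(A)

    for i, (w, P, H) in enumerate(zip(weights, covariances, jacobians)):
        K = P @ H.T @ A                      # Kalman gain for group i
        weights[i] = w + K @ error           # weight update
        covariances[i] = P - K @ H @ P + q_scale * np.eye(P.shape[0])
    return weights, covariances

# Usage sketch: two hypothetical weight groups, one network output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal(4), rng.standard_normal(3)]
covariances = [100.0 * np.eye(4), 100.0 * np.eye(3)]  # large initial uncertainty
jacobians = [rng.standard_normal((1, 4)), rng.standard_normal((1, 3))]
error = np.array([0.5])                               # target minus output
weights, covariances = dekf_step(weights, covariances, jacobians, error)
```

The decoupling is the source of the efficiency claim: each group maintains only its own (n_i x n_i) covariance matrix instead of one covariance over all weights, so the cost per step shrinks while the shared scaling matrix A still lets every group see the combined output uncertainty.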