The size of the time intervals between events conveys information essential for numerous sequential tasks such as motor control and rhythm detection. While Hidden Markov Models tend to ignore this information, recurrent neural networks (RNNs) can in principle learn to make use of it. We focus on Long Short-Term Memory (LSTM) because it usually outperforms other RNNs. Surprisingly, LSTM augmented by "peephole connections" from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes separated by either 50 or 49 discrete time steps, without the help of any short training exemplars. Without external resets or teacher forcing, and without loss of performance on tasks reported earlier, our LSTM variant also learns to generate very stable sequences of highly nonlinear, precisely timed spikes. This makes LSTM a promising approach for real-world tasks that require timing and counting.
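For readers who want a concrete picture of the architecture the abstract describes, below is a minimal sketch of one forward step of an LSTM cell with peephole connections. It is written in NumPy; the parameter names, shapes, and the diagonal (elementwise) peephole weights are illustrative assumptions, not the paper's own code or notation. The sketch follows the common peephole formulation in which the input and forget gates see the previous cell state and the output gate sees the freshly updated one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, h_prev, c_prev, p):
    """One forward step of a peephole LSTM cell (illustrative sketch).

    The peephole weights p['pi'], p['pf'], p['po'] give each gate a
    direct (diagonal) view of the cell state, which is what allows the
    gates to read the cell's internal "timer" and learn precise timing.
    All parameter names and shapes are assumptions for this example.
    """
    # Input and forget gates peek at the *previous* cell state.
    i = sigmoid(p['Wxi'] @ x + p['Whi'] @ h_prev + p['pi'] * c_prev + p['bi'])
    f = sigmoid(p['Wxf'] @ x + p['Whf'] @ h_prev + p['pf'] * c_prev + p['bf'])
    # Candidate cell update.
    g = np.tanh(p['Wxc'] @ x + p['Whc'] @ h_prev + p['bc'])
    c = f * c_prev + i * g
    # Output gate peeks at the *updated* cell state.
    o = sigmoid(p['Wxo'] @ x + p['Who'] @ h_prev + p['po'] * c + p['bo'])
    h = o * np.tanh(c)
    return h, c

# Tiny usage example with random parameters (input size 3, hidden size 4).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
p = {k: rng.normal(scale=0.1, size=(n_hid, n_in)) for k in ('Wxi', 'Wxf', 'Wxc', 'Wxo')}
p.update({k: rng.normal(scale=0.1, size=(n_hid, n_hid)) for k in ('Whi', 'Whf', 'Whc', 'Who')})
p.update({k: rng.normal(scale=0.1, size=n_hid) for k in ('pi', 'pf', 'po')})
p.update({k: np.zeros(n_hid) for k in ('bi', 'bf', 'bc', 'bo')})
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(100):
    h, c = peephole_lstm_step(np.zeros(n_in), h, c, p)
```

Because the gates are multiplicative, a peephole lets a gate open or close at a precise cell-state value, which is one intuitive way such a cell can count time steps between spikes.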