Nonlinear systems analysis (2nd ed.)
Input selection for long-term prediction of time series
IWANN'05 Proceedings of the 8th International Conference on Artificial Neural Networks: Computational Intelligence and Bioinspired Systems
NLq theory: checking and imposing stability of recurrent neural networks for nonlinear modeling
IEEE Transactions on Signal Processing
Stable dynamic backpropagation learning in recurrent neural networks
IEEE Transactions on Neural Networks
Robust local stability of multilayer recurrent neural networks
IEEE Transactions on Neural Networks
Stability analysis of discrete-time recurrent neural networks
IEEE Transactions on Neural Networks
Gradient calculations for dynamic recurrent neural networks: a survey
IEEE Transactions on Neural Networks
Improving reservoirs using intrinsic plasticity
Neurocomputing
Photonic Reservoir Computing with Coupled Semiconductor Optical Amplifiers
OSC '08 Proceedings of the 1st international workshop on Optical SuperComputing
Nonlinear time series online prediction using reservoir Kalman filter
IJCNN'09 Proceedings of the 2009 International Joint Conference on Neural Networks
Applied Computational Intelligence and Soft Computing
Architectural and Markovian factors of echo state networks
Neural Networks
Engineering Applications of Artificial Intelligence
We provide a stability analysis, based on nonlinear feedback theory, for the recently introduced backpropagation-decorrelation (BPDC) recurrent learning algorithm, which adapts only the output weights of a possibly large network and can therefore learn in O(N). Using a small-gain criterion, we derive a simple sufficient stability inequality. The condition can be monitored online to ensure that the recurrent network remains stable, and it can in principle be applied to any network that adapts only its output weights. Based on these results, BPDC learning is further enhanced with an efficient online rescaling algorithm that stabilizes the network while it adapts. In simulations we find that this mechanism improves learning in the provably stable domain. As a byproduct, we show that BPDC is highly competitive on standard data sets, including the recently introduced CATS benchmark data [CATS data. URL: http://www.cis.hut.fi/lendasse/competition/competition.html].
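To illustrate the mechanism described in the abstract, here is a minimal sketch of a reservoir-style network in which only the output weights are adapted, together with an online small-gain-style stability check and a rescaling step. The specific inequality used below (spectral norm of the closed-loop weight matrix below a margin, appropriate for a tanh nonlinearity with Lipschitz constant 1) is an illustrative stand-in, not the exact sufficient condition derived in the paper; all names (`W`, `w_fb`, `w_out`, `rescale`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                          # reservoir size
W = rng.normal(0.0, 1.0, (N, N))                # fixed recurrent weights
W *= 0.8 / np.linalg.norm(W, 2)                 # scale so ||W||_2 = 0.8 < 1
w_fb = rng.normal(0.0, 0.1, N)                  # output-feedback weights (fixed)
w_out = np.zeros(N)                             # the only trained weights

def stable(w_out, margin=1.0):
    """Illustrative small-gain check: the closed-loop matrix
    W + w_fb w_out^T must be a contraction (spectral norm < margin)."""
    M = W + np.outer(w_fb, w_out)
    return np.linalg.norm(M, 2) < margin

def rescale(w_out, margin=1.0, shrink=0.95, max_iter=200):
    """Online rescaling: uniformly shrink the output weights until the
    sufficient stability condition holds again. Terminates because
    ||W||_2 < margin, so w_out -> 0 always satisfies the check."""
    for _ in range(max_iter):
        if stable(w_out, margin):
            break
        w_out = w_out * shrink
    return w_out
```

After each output-weight update (BPDC or otherwise), one would call `stable` and, on violation, `rescale` before continuing. A full spectral norm costs O(N^3); in practice a cheaper upper bound (e.g. via matrix norms or a few power iterations) could be monitored instead, which keeps the per-step overhead closer to the O(N) cost of the weight update itself.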