This paper focuses on online learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLN). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose online version, causal recursive backpropagation (CRBP), has some advantages over other online methods. CRBP includes as particular cases backpropagation (BP), temporal BP, and the Back-Tsoi algorithm (1991), among others, thereby providing a unifying view of gradient calculation for recurrent networks with local feedback. The only previously known learning method for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and faster convergence than the Back-Tsoi algorithm. The computational complexity of CRBP is comparable to that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with CRBP. CRBP exhibits similar performance, and a detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time, while RTRL is not local in space.
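To make the IIR-MLP architecture concrete, the following is a minimal sketch of the forward pass only (the CRBP gradient recursions themselves are not reproduced here). Each synapse is a small IIR (ARMA) filter, so every connection carries its own local feedback; a neuron sums its synapse outputs and applies a squashing nonlinearity at each time step. Function names, the choice of `tanh`, and the coefficient layout are illustrative assumptions, not the paper's exact notation.

```python
import math

def iir_synapse(x, b, a):
    # One IIR synapse: y[n] = sum_k b[k]*x[n-k] + sum_j a[j]*y[n-1-j]
    # b: feedforward (MA) coefficients, a: feedback (AR) coefficients.
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[j] * y[n - 1 - j] for j in range(len(a)) if n - 1 - j >= 0)
        y.append(acc)
    return y

def iir_neuron(inputs, synapses):
    # One IIR-MLP neuron: each input sequence passes through its own
    # IIR synapse; the filtered signals are summed and squashed per step.
    # `synapses` is a list of (b, a) coefficient pairs, one per input.
    T = len(inputs[0])
    outs = [iir_synapse(x, b, a) for x, (b, a) in zip(inputs, synapses)]
    return [math.tanh(sum(s[n] for s in outs)) for n in range(T)]
```

Because the feedback is confined to each synapse (rather than connecting distinct neurons), the gradient recursions that CRBP derives can likewise be kept local to each connection, which is the source of the locality-in-space property contrasted with RTRL.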