Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms including backpropagation and backpropagation-through-time without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach.
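The idea of reading a gradient algorithm off a reversed diagram, rather than expanding chain rules by hand, can be illustrated with a minimal sketch. The example below is not from the paper: it assumes a simple linear recurrent system x[t] = W x[t-1] + u[t] with loss 0.5*||x[T]||^2, and obtains the backpropagation-through-time gradient by running the reversed (adjoint) diagram, in which the block W becomes W^T and signal flow is reversed in time. All function names are illustrative.

```python
import numpy as np

def forward(W, u, x0):
    """Run the forward block diagram, storing the state trajectory."""
    xs = [x0]
    for ut in u:
        xs.append(W @ xs[-1] + ut)     # one stage of the forward diagram
    return xs

def gradient_via_adjoint(W, u, x0):
    """Gradient of 0.5*||x[T]||^2 w.r.t. W, read off the reversed diagram:
    the branch through W becomes a branch through W^T, run backward in time."""
    xs = forward(W, u, x0)
    delta = xs[-1]                     # seed of the reversed diagram: dL/dx[T]
    dW = np.zeros_like(W)
    for t in range(len(u), 0, -1):
        dW += np.outer(delta, xs[t - 1])  # each stage contributes delta[t] x[t-1]^T
        delta = W.T @ delta               # propagate backward through W^T
    return dW

# Sanity check against a finite-difference estimate of one entry of dL/dW.
rng = np.random.default_rng(0)
W = 0.5 * rng.normal(size=(3, 3))
u = [rng.normal(size=3) for _ in range(5)]
x0 = rng.normal(size=3)

dW = gradient_via_adjoint(W, u, x0)

loss = lambda W_: 0.5 * np.sum(forward(W_, u, x0)[-1] ** 2)
eps = 1e-6
W_pert = W.copy()
W_pert[0, 1] += eps
fd = (loss(W_pert) - loss(W)) / eps
print(abs(fd - dW[0, 1]) < 1e-4)       # the two estimates agree
```

Note that no chain rule appears explicitly: the backward pass is just the forward diagram with transposed blocks and reversed arrows, which is the mechanical rule the paper generalizes to arbitrary time-dependent architectures.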