Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to compute error gradients in nonlinear systems with hundreds of thousands of parameters. However, classical backpropagation applies only to feedforward architectures, a severe restriction. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to efficiently compute error gradients for networks that have temporal dynamics, which opens applications to a host of problems in system identification and control.
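As a concrete illustration of how error gradients can be computed through temporal dynamics, the sketch below implements backpropagation through time (BPTT) for a toy Elman-style recurrent network in plain NumPy. The network sizes, random data, tanh dynamics, and quadratic trajectory loss are assumptions chosen for illustration, not details taken from the paper.

```python
# Minimal BPTT sketch (illustrative assumptions throughout):
# hidden state  h_t = tanh(W_xh x_t + W_hh h_{t-1})
# output        y_t = W_hy h_t
# loss          L   = 0.5 * sum_t ||y_t - target_t||^2
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, T = 3, 5, 2, 10           # assumed toy dimensions
W_xh = rng.normal(0, 0.1, (n_hid, n_in))      # input -> hidden weights
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))     # recurrent hidden -> hidden
W_hy = rng.normal(0, 0.1, (n_out, n_hid))     # hidden -> output weights

xs      = rng.normal(size=(T, n_in))          # input sequence
targets = rng.normal(size=(T, n_out))         # desired output trajectory

# ---- forward pass: unroll the recurrent dynamics over T steps ----
hs = np.zeros((T + 1, n_hid))                 # hs[0] is the initial state
ys = np.zeros((T, n_out))
for t in range(T):
    hs[t + 1] = np.tanh(W_xh @ xs[t] + W_hh @ hs[t])
    ys[t] = W_hy @ hs[t + 1]
loss = 0.5 * np.sum((ys - targets) ** 2)

# ---- backward pass: propagate errors back through time ----
dW_xh = np.zeros_like(W_xh)
dW_hh = np.zeros_like(W_hh)
dW_hy = np.zeros_like(W_hy)
dh_next = np.zeros(n_hid)                     # gradient arriving from step t+1
for t in reversed(range(T)):
    dy = ys[t] - targets[t]                   # dL/dy_t for the quadratic loss
    dW_hy += np.outer(dy, hs[t + 1])
    dh = W_hy.T @ dy + dh_next                # error from output and from the future
    dz = dh * (1.0 - hs[t + 1] ** 2)          # back through the tanh nonlinearity
    dW_xh += np.outer(dz, xs[t])
    dW_hh += np.outer(dz, hs[t])
    dh_next = W_hh.T @ dz                     # send the gradient one step back in time

print("loss:", loss, " |dW_hh|:", np.linalg.norm(dW_hh))
```

The gradients produced this way can be checked against finite differences on any single weight, which is a standard sanity test when implementing recurrent gradient computations by hand.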