The activation and weight dynamics of artificial neural networks are derived from a partial differential equation (PDE) that may incorporate the weights either as parameters or as variables. It is shown that a single first-order Hamilton-Jacobi "parametrical" PDE suffices to derive the various neurodynamical paradigms in use today. When the weights are taken as variables, a new type of neurodynamics emerges: a Hamilton function is derived under which the weights obey a second-order ordinary differential equation (ODE). Because this ODE models the forces experienced by the weights in the presence of a generalized error potential, it is called a learning law. Results for the association of time-varying patterns, using parametrical as well as dynamical weights, show that learning rules can be replaced by learning laws at equal performance.
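A minimal worked sketch of the distinction the abstract draws, assuming a standard separable Hamilton function with unit "mass" over weights w, conjugate momenta p, and a generalized error potential E(w) (this specific form is an illustrative assumption, not necessarily the paper's exact formulation):

    % Assumed Hamilton function over weights w and conjugate momenta p
    H(w, p) = \tfrac{1}{2}\lVert p \rVert^2 + E(w)

    % Hamilton's equations of motion
    \dot{w} = \frac{\partial H}{\partial p} = p,
    \qquad
    \dot{p} = -\frac{\partial H}{\partial w} = -\nabla E(w)

    % Eliminating p yields a second-order ODE for the weights (a "learning law"):
    \ddot{w} = -\nabla E(w)

    % in contrast to a first-order learning rule such as gradient descent:
    \dot{w} = -\eta \, \nabla E(w)

Under this reading, a learning rule prescribes a velocity for the weights, whereas a learning law prescribes an acceleration: the weights move like particles driven by the force -∇E(w) arising from the error potential.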