Recurrent backpropagation schemes for fixed point learning in continuous-time dynamic neural networks can be formalized through a differential-algebraic model, which in turn leads to singularly perturbed training techniques. Such models clarify the relative time-scaling between the network evolution and the adaptation dynamics, and allow for rigorous local convergence proofs. The present contribution addresses some related issues in a discrete-time context: fixed point problems can be analyzed in terms of iterations with different evolution rates, whereas periodic trajectory learning can be reduced to a multiple fixed point learning problem via Poincaré maps.
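To make the two-rate idea concrete, the following is a minimal sketch of discrete-time recurrent backpropagation in the Almeida-Pineda style, written in NumPy. The specific network model (x* = tanh(W x* + u)), the quadratic loss, the function name, and all parameter values are illustrative assumptions, not the paper's formulation; the point is only the separation between fast relaxation iterations (state and adjoint) and slow weight adaptation.

```python
import numpy as np

def recurrent_backprop_fixed_point(W, u, target, n_fast=100, lr=0.05, n_slow=300):
    """Two-time-scale sketch of fixed point learning (illustrative assumptions).

    Fast inner iterations relax the network state x and the adjoint z toward
    their fixed points; the slow outer loop adapts the weights W with a small
    step, mimicking a singularly perturbed (two-rate) training scheme.
    Local convergence presumes the map x -> tanh(W x + u) is a contraction
    near the fixed point (e.g. sufficiently small weights).
    """
    n = W.shape[0]
    x = np.zeros(n)
    z = np.zeros(n)
    for _ in range(n_slow):                 # slow adaptation dynamics
        for _ in range(n_fast):             # fast network relaxation
            x = np.tanh(W @ x + u)          # x* = tanh(W x* + u)
        a = W @ x + u
        d = 1.0 - np.tanh(a) ** 2           # tanh'(a)
        e = x - target                      # dE/dx* for E = 0.5 * ||x* - target||^2
        for _ in range(n_fast):             # fast adjoint relaxation
            z = W.T @ (d * z) + e           # z = (D W)^T z + e, with D = diag(d)
        W -= lr * np.outer(d * z, x)        # dE/dW_ij = tanh'(a_i) * z_i * x*_j
    return W, x

# Hypothetical usage: small random weights keep the state map contractive.
rng = np.random.default_rng(0)
W0 = 0.1 * rng.standard_normal((3, 3))
u = rng.standard_normal(3)
W, x_star = recurrent_backprop_fixed_point(W0, u, target=np.array([0.2, -0.1, 0.3]))
```

Under the reduction mentioned above, periodic trajectory learning would apply the same machinery not to the network map itself but to the fixed points of its Poincaré (return) map, each target cycle contributing one fixed point problem of this form.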