This work addresses the problem of improving the generalization capabilities of continuous recurrent neural networks. The learning task is cast as an optimal control problem in which the weights and the initial network state are treated as unknown controls. A new learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed. Numerical examples demonstrate a substantial improvement in the generalization capabilities of a recurrent network after training with the proposed algorithm.
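
For readers who want the mechanics, the following is a minimal numerical sketch of adjoint (costate) training of a continuous-time recurrent network in the spirit of the abstract, not a reconstruction of the paper's actual algorithm. In the standard Pontryagin formulation, one minimizes a cost J = integral of L(x(t)) dt subject to dynamics x' = f(x, W); with Hamiltonian H = L + p^T f, the costate satisfies p' = -dH/dx with p(T) = 0, the gradient with respect to the weights is the time integral of dH/dW, and the gradient with respect to the initial state x(0) is p(0). The specific dynamics x' = -x + W tanh(x), the tracking cost, the weight-decay term, and all step sizes below are illustrative assumptions.

import numpy as np

# Hypothetical sketch: adjoint-based training of a small continuous-time
# RNN, treating the weight matrix W and the initial state x0 as controls.
# Dynamics, cost, target trajectory and step sizes are assumptions made
# for illustration, not taken from the paper.

rng = np.random.default_rng(0)
n, T, dt = 3, 2.0, 0.01            # state dimension, horizon, Euler step
steps = int(T / dt)
ts = np.linspace(0.0, T, steps + 1)
target = np.stack([np.sin(2 * np.pi * ts / T),
                   np.cos(2 * np.pi * ts / T),
                   np.zeros_like(ts)], axis=1)   # desired state trajectory

W = 0.1 * rng.standard_normal((n, n))
x0 = np.zeros(n)
lam = 1e-3                         # weight-decay strength (assumption)

def f(x, W):                       # network dynamics: x' = -x + W tanh(x)
    return -x + W @ np.tanh(x)

def forward(W, x0):
    """Forward Euler integration of the state, storing the trajectory."""
    xs = np.empty((steps + 1, n))
    xs[0] = x0
    for k in range(steps):
        xs[k + 1] = xs[k] + dt * f(xs[k], W)
    return xs

def gradients(W, xs):
    """Backward costate pass: p' = -dH/dx with p(T) = 0."""
    p = np.zeros(n)                # terminal condition p(T) = 0
    gW = lam * W                   # gradient of the 0.5*lam*||W||^2 term
    for k in range(steps, 0, -1):
        err = xs[k] - target[k]    # dL/dx for L = 0.5*||x - target||^2
        s = 1.0 - np.tanh(xs[k]) ** 2
        # dH/dx = dL/dx + (df/dx)^T p, with df/dx = -I + W diag(s)
        pdot = -(err - p + (W * s).T @ p)
        gW += dt * np.outer(p, np.tanh(xs[k]))   # accumulate dH/dW dt
        p = p - dt * pdot          # step the costate backwards in time
    return gW, p                   # p is now p(0), the gradient w.r.t. x0

eta = 0.2                          # gradient-descent step (assumption)
for it in range(201):
    xs = forward(W, x0)
    gW, gx0 = gradients(W, xs)
    W -= eta * gW
    x0 -= eta * gx0
    if it % 50 == 0:
        J = 0.5 * dt * np.sum((xs - target) ** 2)
        print(f"iter {it:3d}   tracking cost {J:.4f}")

Note that the same backward pass delivers exact gradients for both W and x(0), so treating the initial state as an additional control, as the abstract describes, costs nothing extra: one forward and one backward integration per update.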