Learning translation invariant recognition in massively parallel networks
PARLE: Parallel Architectures and Languages Europe, Volume I: Parallel architectures
Introduction to statistical pattern recognition (2nd ed.)
The recurrent cascade-correlation architecture
NIPS-3 Proceedings of the 1990 conference on Advances in neural information processing systems 3
Neural networks and the bias/variance dilemma
Neural Computation
Regularization theory and neural networks architectures
Neural Computation
Training with noise is equivalent to Tikhonov regularization
Neural Computation
On the practical applicability of VC dimension bounds
Neural Computation
The dynamic universality of sigmoidal neural networks
Information and Computation
Efficient training of recurrent neural network with time delays
Neural Networks
Neural Networks for Pattern Recognition
Generalization of Elman Networks
ICANN '97 Proceedings of the 7th International Conference on Artificial Neural Networks
Combining Regularized Neural Networks
ICANN '97 Proceedings of the 7th International Conference on Artificial Neural Networks
A Double Gradient Algorithm to Optimize Regularization
ICANN '97 Proceedings of the 7th International Conference on Artificial Neural Networks
A smoothing regularizer for feedforward and recurrent neural networks
Neural Computation
Sample complexity for learning recurrent perceptron mappings
IEEE Transactions on Information Theory
Dynamic structure neural networks for stable adaptive control of nonlinear systems
IEEE Transactions on Neural Networks
Structure optimization of neural networks with the A*-algorithm
IEEE Transactions on Neural Networks
Fast training of recurrent networks based on the EM algorithm
IEEE Transactions on Neural Networks
Learning continuous trajectories in recurrent neural networks with time-dependent weights
IEEE Transactions on Neural Networks
Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks
IEEE Transactions on Neural Networks
Gradient calculations for dynamic recurrent neural networks: a survey
IEEE Transactions on Neural Networks
This work addresses the problem of improving the generalization capabilities of continuous recurrent neural networks. The learning task is transformed into an optimal control framework in which the weights and the initial network state are treated as unknown controls. A new learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed, and its convergence is discussed under reasonable assumptions. Numerical examples demonstrate a substantial improvement in the generalization capabilities of a dynamic network after the learning process.
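The optimal-control view described in the abstract can be illustrated with a minimal sketch. The following is not the paper's algorithm; it is a simplified, assumed setup: a continuous-time RNN dx/dt = -x + W tanh(x) + b discretized by forward Euler, with a terminal-state loss. The weights W, bias b, and initial state x0 are treated as controls, and their gradients are computed with a backward adjoint (costate) recursion, the discrete analogue of the costate equation in Pontryagin's maximum principle.

```python
import numpy as np

def forward(W, b, x0, h, N):
    """Euler-discretized trajectory of dx/dt = -x + W tanh(x) + b."""
    xs = [x0]
    x = x0
    for _ in range(N):
        x = x + h * (-x + W @ np.tanh(x) + b)
        xs.append(x)
    return xs

def loss_and_grads(W, b, x0, target, h, N):
    """Terminal loss 0.5*||x_N - target||^2 and its gradients via the
    backward adjoint recursion lambda_k = J_k^T lambda_{k+1}."""
    xs = forward(W, b, x0, h, N)
    err = xs[-1] - target
    L = 0.5 * err @ err
    lam = err                      # lambda_N = dL/dx_N
    gW = np.zeros_like(W)
    gb = np.zeros_like(b)
    for k in range(N - 1, -1, -1):
        phi = np.tanh(xs[k])
        gW += h * np.outer(lam, phi)   # dL/dW contribution of step k
        gb += h * lam                  # dL/db contribution of step k
        # Jacobian of the Euler step: (1-h)I + h W diag(1 - tanh^2)
        lam = (1 - h) * lam + h * (1 - phi**2) * (W.T @ lam)
    return L, gW, gb, lam          # final lam equals dL/dx0

# Tiny training run: gradient descent on all three controls.
rng = np.random.default_rng(0)
n = 4
W = 0.1 * rng.standard_normal((n, n))
b = np.zeros(n)
x0 = rng.standard_normal(n)
target = np.ones(n)
h, N, lr = 0.05, 40, 0.3

L0, *_ = loss_and_grads(W, b, x0, target, h, N)
for _ in range(300):
    L, gW, gb, gx0 = loss_and_grads(W, b, x0, target, h, N)
    W -= lr * gW
    b -= lr * gb
    x0 -= lr * gx0
Lf, *_ = loss_and_grads(W, b, x0, target, h, N)
```

Treating x0 as a control alongside the weights, as the abstract proposes, simply means the adjoint at time zero is used as one more gradient; nothing else in the backward pass changes.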