Recurrent perceptron classifiers generalize the usual perceptron model. They correspond to linear transformations of input vectors obtained by means of "autoregressive moving-average" (ARMA) schemes, or infinite impulse response filters, and thus take into account the correlations and dependencies among input coordinates that arise from linear digital filtering. This paper provides tight bounds on the sample complexity associated with fitting such models to experimental data. The results are expressed in the context of the theory of probably approximately correct (PAC) learning.
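To make the model class concrete, here is a minimal sketch of one way a recurrent perceptron classifier of the kind described above could be realized: the input sequence is passed through an IIR (ARMA) filter and the final filter output is thresholded, as in an ordinary perceptron. The coefficient names `a`, `b`, the filter orders, and the decision on the last output are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def arma_filter(u, a, b):
    """Run an IIR (ARMA) filter over the input sequence u.

    y[t] = sum_j b[j] * u[t - j] + sum_k a[k] * y[t - 1 - k],
    with zero initial conditions (an assumption for this sketch).
    """
    y = np.zeros(len(u))
    for t in range(len(u)):
        ma = sum(b[j] * u[t - j] for j in range(len(b)) if t - j >= 0)
        ar = sum(a[k] * y[t - 1 - k] for k in range(len(a)) if t - 1 - k >= 0)
        y[t] = ma + ar
    return y

def recurrent_perceptron(u, a, b, threshold=0.0):
    """Classify an input sequence: filter it linearly, then threshold
    the final filter output, as a perceptron would."""
    y = arma_filter(np.asarray(u, dtype=float), a, b)
    return 1 if y[-1] >= threshold else -1

# Illustrative usage with made-up coefficients.
a = np.array([0.5, -0.25])   # autoregressive (feedback) part
b = np.array([1.0, 0.3])     # moving-average (feedforward) part
print(recurrent_perceptron([0.2, -0.1, 0.7, 0.4], a, b))
```

Fitting such a model to data means choosing the filter coefficients and threshold from labeled input sequences; the paper's sample-complexity bounds quantify how many such labeled sequences suffice for PAC-style generalization guarantees over this class.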