On learning of sigmoid neural networks
Complexity
PAC learning theory provides a framework for assessing the learning properties of static models. The theory has previously been extended to modeling tasks with m-dependent data under the assumption that the data are uniformly distributed; that extension permits the learning of nonlinear FIR models, but only when the data are uniformly distributed. In this paper, the PAC learning scheme is extended further to handle any FIR model under any fixed distribution of the data. This fixed-distribution, m-dependent extension of PAC learning theory is then applied to the learning of FIR three-layer feedforward sigmoid neural networks.
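To make the model class concrete, the sketch below shows what a three-layer feedforward sigmoid network looks like when used as a nonlinear FIR model: the output at time t depends only on the last m inputs (a tapped-delay-line regressor), which is why consecutive data points are m-dependent. This is a hypothetical illustration with invented names and weight shapes, not the paper's construction or notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SigmoidFIRNet:
    """Illustrative three-layer feedforward sigmoid network acting as a
    nonlinear FIR model of memory length m (hypothetical sketch)."""

    def __init__(self, m, hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        self.m = m
        self.W1 = 0.5 * rng.standard_normal((hidden, m))  # input -> hidden weights
        self.b1 = np.zeros(hidden)
        self.W2 = 0.5 * rng.standard_normal(hidden)       # hidden -> output weights
        self.b2 = 0.0

    def predict(self, u):
        """u: scalar input sequence; returns one prediction per t >= m-1."""
        u = np.asarray(u, dtype=float)
        # Tapped-delay-line regressors [u_t, u_{t-1}, ..., u_{t-m+1}]:
        # the FIR property means the output at t sees only these m samples.
        X = np.stack([u[t - self.m + 1:t + 1][::-1]
                      for t in range(self.m - 1, len(u))])
        return sigmoid(X @ self.W1.T + self.b1) @ self.W2 + self.b2
```

Because each prediction uses only an m-sample window, perturbing an input more than m steps in the past leaves later predictions unchanged, which is the dependence structure the m-dependent PAC framework is designed for.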