Communications of the ACM
Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
What size net gives valid generalization? Neural Computation.
Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation.
Finiteness results for sigmoidal “neural” networks. STOC '93: Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing.
Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers. COLT '93: Proceedings of the Sixth Annual Conference on Computational Learning Theory.
Polynomial bounds for VC dimension of sigmoidal neural networks. STOC '95: Proceedings of the Twenty-Seventh Annual ACM Symposium on Theory of Computing.
Vapnik-Chervonenkis dimension bounds for two- and three-layer networks. Neural Computation.
Function Estimation by Feedforward Sigmoidal Networks with Bounded Weights. Neural Processing Letters.
Neural networks with local receptive fields and superlinear VC dimension. Neural Computation.
RBF Neural Networks and Descartes' Rule of Signs. ALT '02: Proceedings of the 13th International Conference on Algorithmic Learning Theory.
Descartes' rule of signs for radial basis function neural networks. Neural Computation.
We give upper bounds on the Vapnik-Chervonenkis dimension and pseudodimension of two-layer neural networks that use the standard sigmoid function or radial basis functions and have inputs from {-D, ..., D}^n. In Valiant's probably approximately correct (PAC) learning framework for pattern classification, and in Haussler's generalization of this framework to nonlinear regression, these results imply that the number of training examples necessary for satisfactory learning performance grows no more rapidly than W log(WD), where W is the number of weights. The best previous bound for these networks was O(W^4).
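To make the W log(WD) claim concrete, the following sketch (not part of the original abstract) applies the standard PAC sample-complexity bound of Blumer, Ehrenfeucht, Haussler, and Warmuth, cited above, to a function class whose VC dimension is O(W log(WD)):

% Sketch: how a VC-dimension bound of O(W log(WD)) translates into a
% sample-size bound; constants and the exact dependence on epsilon and
% delta come from the general PAC result, not from this abstract.
\[
  \mathrm{VCdim} = O\bigl(W \log (WD)\bigr)
  \quad\Longrightarrow\quad
  m(\varepsilon, \delta) =
  O\!\left(\frac{1}{\varepsilon}\Bigl(W \log (WD)\,\log\frac{1}{\varepsilon}
      + \log\frac{1}{\delta}\Bigr)\right),
\]

where m(ε, δ) denotes a number of training examples sufficient for a consistent learner to reach error at most ε with probability at least 1 - δ. Only the W log(WD) factor is the bound stated in the abstract; the accuracy and confidence factors are the generic ones from the PAC framework.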