A neural tree is a feedforward neural network with at most one edge outgoing from each node. We investigate the number of examples that a learning algorithm needs when using neural trees as the hypothesis class, and give bounds for this sample complexity in terms of the VC dimension. We consider trees consisting of threshold, sigmoidal, and linear gates. In particular, we show that the class of threshold trees and the class of sigmoidal trees on n inputs both have VC dimension Ω(n log n). This bound is asymptotically tight for the class of threshold trees. We also present an upper bound for this class in which the constants involved are considerably smaller than in a previous calculation. Finally, we argue that the VC dimension of threshold or sigmoidal trees cannot increase when the nodes are allowed to compute linear functions. This sheds some light on a recent result that exhibited neural networks with quadratic VC dimension.
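To make the two central notions concrete, the following is a minimal sketch (not from the paper; all names, the example tree, and the weight grid are illustrative assumptions): a read-once tree of linear threshold gates, where each node has exactly one outgoing edge, and a brute-force shattering check of the kind that underlies VC-dimension lower-bound arguments.

```python
from itertools import product

def threshold_gate(weights, bias, inputs):
    """Linear threshold gate: outputs 1 iff w . x + b >= 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

class ThresholdTree:
    """A toy neural tree of threshold gates: every node feeds at most
    one parent, so the architecture is a tree rooted at the output gate."""
    def __init__(self, weights, bias, children):
        self.weights = weights    # one weight per child
        self.bias = bias
        self.children = children  # each child: a ThresholdTree or an input index

    def evaluate(self, x):
        vals = [c.evaluate(x) if isinstance(c, ThresholdTree) else x[c]
                for c in self.children]
        return threshold_gate(self.weights, self.bias, vals)

def shatters(hypotheses, points):
    """Brute-force VC check: does the hypothesis class realize
    all 2^|points| labelings of the given point set?"""
    labelings = {tuple(h(p) for p in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Example tree computing (x0 AND x1) OR (x2 AND x3) on 0/1 inputs;
# each input index appears once, as the read-once structure requires.
and_left  = ThresholdTree([1, 1], -2, [0, 1])
and_right = ThresholdTree([1, 1], -2, [2, 3])
tree = ThresholdTree([1, 1], -1, [and_left, and_right])

# Single threshold gates over a small weight grid already shatter three
# points in general position, consistent with halfplanes in the plane
# having VC dimension 3.
gates = [(lambda p, w=w, b=b: threshold_gate(w, b, p))
         for w in product([-1, 0, 1], repeat=2) for b in (-0.5, 0.5)]
```

For example, `tree.evaluate([1, 1, 0, 0])` returns 1 while `tree.evaluate([0, 1, 1, 0])` returns 0, and `shatters(gates, [(0, 0), (1, 0), (0, 1)])` returns True. The exhaustive labeling check is only feasible for toy finite classes; the paper's bounds are what replace it for real parameterized classes.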