It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most O(w · log w), where w is the total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we exhibit for any depth d ≥ 3 a large class of feedforward neural nets of depth d with w weights that have VC-dimension Ω(w · log w). This lower bound holds even if the inputs are restricted to Boolean values. The proof of this result relies on a new method that allows us to encode more program-bits in the weights of a neural net than previously thought possible.
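For reference, the two matching bounds stated in the abstract can be written compactly as follows (a minimal LaTeX sketch; the notation VCdim(·) and the symbols N and N_d for the net classes are our shorthand, not taken from the paper):

\[
\mathrm{VCdim}(\mathcal{N}) = O(w \cdot \log w) \quad \text{for every feedforward net } \mathcal{N} \text{ with } w \text{ weights and linear threshold gates,}
\]
\[
\mathrm{VCdim}(\mathcal{N}_d) = \Omega(w \cdot \log w) \quad \text{for the exhibited nets } \mathcal{N}_d \text{ of any fixed depth } d \ge 3, \text{ even with Boolean inputs.}
\]

Together, the upper and lower bounds show that the VC-dimension of such depth-d architectures is Θ(w · log w), i.e., the classical O(w · log w) upper bound is tight up to constant factors.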