RBF Neural Networks and Descartes' Rule of Signs
Proceedings of the 13th International Conference on Algorithmic Learning Theory (ALT '02)
We establish superlinear lower bounds on the Vapnik-Chervonenkis (VC) dimension of neural networks with one hidden layer and local receptive field neurons. As the main result, we show that every reasonably sized standard network of radial basis function (RBF) neurons has VC dimension Ω(W log k), where W is the number of parameters and k is the number of nodes. This significantly improves the previously known linear bound. We also derive superlinear lower bounds for networks of discrete and continuous variants of center-surround neurons. The constants in all bounds are larger than those obtained thus far for sigmoidal neural networks of constant depth. The results have several implications for the computational power and learning capabilities of neural networks with local receptive fields. In particular, they imply that the pseudo-dimension and the fat-shattering dimension of these networks are superlinear as well, and they yield lower bounds even when the input dimension is fixed. The methods developed here appear suitable for obtaining similar results for other kernel-based function classes.
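To make the quantities in the bound concrete, the following minimal Python sketch (not from the paper) sets up a standard one-hidden-layer Gaussian RBF network and counts its parameters. The Gaussian kernel, the parameter layout, and the resulting count W = k(d + 2) + 1 are common conventions assumed here for illustration; the paper's bound is asymptotic, and its constants are not reproduced.

```python
# Sketch of a standard one-hidden-layer RBF network, illustrating W (total
# trainable parameters) and k (number of hidden RBF nodes) from the stated
# Omega(W log k) lower bound. Kernel choice and parameterization are
# assumptions, not taken from the paper.
import numpy as np

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Output of a Gaussian RBF network: sum_i w_i * exp(-||x - c_i||^2 / s_i^2) + b."""
    sq_dists = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 for each node
    activations = np.exp(-sq_dists / widths ** 2)   # radial (locally tuned) responses
    return weights @ activations + bias

d, k = 4, 16                                        # input dimension, hidden nodes
rng = np.random.default_rng(0)
centers = rng.standard_normal((k, d))               # k centers: k*d parameters
widths = np.ones(k)                                 # one width per node: k parameters
weights = rng.standard_normal(k)                    # one output weight per node: k parameters

W = centers.size + widths.size + weights.size + 1   # W = k*(d + 2) + 1, counting the bias
print(rbf_forward(np.zeros(d), centers, widths, weights))
print(f"W = {W}; the bound grows on the order of W*log k ~ {W * np.log2(k):.0f}")
```

Under these assumptions, the hidden layer dominates the parameter count, so the superlinear Ω(W log k) bound exceeds the number of parameters by a logarithmic factor in the network size.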