We establish versions of Descartes' rule of signs for radial basis function (RBF) neural networks. The RBF rules of signs yield tight bounds on the number of zeros of univariate networks under certain parameter restrictions. Moreover, they can be used to infer that the Vapnik-Chervonenkis (VC) dimension and pseudodimension of these networks are at most linear. This contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds on network size, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
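To illustrate the flavor of a rule of signs for univariate RBF networks, the sketch below evaluates a Gaussian RBF network with ordered centers, equal widths, and a weight sequence whose signs alternate, and counts the zeros detected on a fine grid. It is a hypothetical numerical illustration (the function and parameter values are chosen here, not taken from the paper); the check is that the number of zeros does not exceed the number of sign changes in the weight sequence.

```python
import numpy as np

def rbf(x, centers, weights, width):
    # Univariate Gaussian RBF network: weighted sum of bumps of equal width
    return sum(w * np.exp(-((x - c) ** 2) / width ** 2)
               for c, w in zip(centers, weights))

def sign_changes(seq):
    # Count sign alternations in a sequence, ignoring exact zeros
    s = [v for v in seq if v != 0]
    return sum(1 for a, b in zip(s, s[1:]) if a * b < 0)

# Illustrative parameters (chosen for this sketch): ordered centers,
# equal widths, weight signs alternating +, -, +, -
centers = np.array([-3.0, -1.0, 1.0, 3.0])
weights = np.array([1.0, -2.0, 1.5, -1.0])
width = 1.0

# Approximate the zero count of the network by sign changes on a dense grid
xs = np.linspace(-10.0, 10.0, 100001)
ys = rbf(xs, centers, weights, width)
zeros = sign_changes(ys)

print(zeros, sign_changes(weights))
```

With this weight pattern the grid detects three zeros, matching the three sign changes in the weights; in general the sign-change count serves as the upper bound.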