Model Complexity of Neural Networks and Integral Transforms
In: ICANN '09: Proceedings of the 19th International Conference on Artificial Neural Networks, Part I
Integral transforms whose kernels correspond to computational units are exploited to derive estimates of network complexity. The estimates are obtained by combining tools from nonlinear approximation theory and functional analysis with representations of functions as infinite neural networks. The results are applied to perceptron networks.
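As a rough illustration of the infinite-network idea (a sketch under standard assumptions, not the paper's construction): any continuously differentiable function f on [0, 1] satisfies f(x) = f(0) + ∫₀¹ f′(t) H(x − t) dt, where H is the Heaviside step. The integrand is a threshold perceptron unit with threshold t and output weight f′(t) dt, so f is an "infinite network" of such units, and discretizing the integral with a quadrature rule yields a finite perceptron network whose error shrinks as the number of units grows. All function names below are illustrative, not taken from the paper.

```python
import numpy as np

def heaviside(z):
    # Threshold (Heaviside) computational unit.
    return (z >= 0).astype(float)

def perceptron_net(f, fprime, n):
    """Finite network obtained by discretizing the integral
    representation f(x) = f(0) + \\int_0^1 f'(t) H(x - t) dt
    with an n-point midpoint rule."""
    t = (np.arange(n) + 0.5) / n      # hidden-unit thresholds (quadrature nodes)
    w = fprime(t) / n                 # output weights f'(t_i) * (1/n)
    def net(x):
        x = np.atleast_1d(x)
        # Sum of n threshold units: f(0) + sum_i w_i H(x - t_i)
        return f(0.0) + heaviside(x[:, None] - t[None, :]) @ w
    return net

# Target function and its derivative on [0, 1].
f = lambda x: np.sin(3 * x)
fprime = lambda x: 3 * np.cos(3 * x)

xs = np.linspace(0, 1, 200)
for n in (10, 100):
    err = np.max(np.abs(perceptron_net(f, fprime, n)(xs) - f(xs)))
    print(n, err)
```

With this construction the uniform error is governed by the quadrature error of the discretization, so it decreases roughly like max|f′|/n here; sharper, dimension-aware rates of exactly this kind are what the complexity estimates in the paper quantify.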