Rate of approximation results motivated by robust neural network learning
COLT '93: Proceedings of the Sixth Annual Conference on Computational Learning Theory
Approximation and learning of convex superpositions
Journal of Computer and System Sciences, Special Issue: 26th Annual ACM Symposium on the Theory of Computing (STOC '94), May 23–25, 1994, and Second Annual European Conference on Computational Learning Theory (EuroCOLT '95), March 13–15, 1995
Error Estimates for Approximate Optimization by the Extended Ritz Method
SIAM Journal on Optimization
Journal of Approximation Theory
Estimates of Network Complexity and Integral Representations
ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I
Complexity of Gaussian-radial-basis networks approximating smooth functions
Journal of Complexity
Model Complexity of Neural Networks and Integral Transforms
ICANN '09: Proceedings of the 19th International Conference on Artificial Neural Networks, Part I
Some comparisons of model complexity in linear and neural-network approximation
ICANN '10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part III
The complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions admitting suitable integral representations as networks with infinitely many hidden units, upper bounds are derived on the rate at which the approximation error decreases as the number of network units grows. These bounds are obtained for various norms within the framework of Bochner integration, and the results are applied to perceptron networks.
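For context, an illustrative sketch of the kind of estimate involved (a standard Maurey-Jones-Barron-type bound, not necessarily the exact statement proved in the paper): such results typically start from an integral representation of f as a network with a continuum of hidden units and then bound the error of approximation by networks with n units.

% Integral representation: a network with infinitely many hidden units
% \phi(\cdot, a) indexed by parameters a in A, weighted by w(a); the
% integral is interpreted as a Bochner integral.
f = \int_{A} w(a)\, \phi(\cdot, a)\, d\mu(a)

% Maurey-Jones-Barron-type rate: in a Hilbert space, with s_G the supremum
% of norms of dictionary elements and \|f\|_G the G-variation of f, the
% best approximation by linear combinations of n dictionary elements obeys
\min_{g \in \operatorname{span}_n G} \| f - g \| \;\le\; \frac{s_G \, \|f\|_G}{\sqrt{n}}

For perceptron networks, the hidden units would be of the form \phi(x, a) = \sigma(v \cdot x + b) with parameters a = (v, b), so the representation expresses f as an integral over all weight-bias pairs (this instantiation is an assumption for illustration, based on the standard perceptron model).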