Multilayer feedforward networks are universal approximators. Neural Networks.
Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation.
A better approximation for balls. Journal of Approximation Theory.
Neural Networks: A Comprehensive Foundation.
Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory.
Sup-norm approximation bounds for networks through probabilistic methods. IEEE Transactions on Information Theory.
Accurate and parsimonious approximations for indicator functions of d-dimensional balls and related functions are given using level sets associated with the thresholding of a linear combination of ramp sigmoid activation functions. In neural network terminology, a single-hidden-layer perceptron network with ramp sigmoid activation functions is used to approximate the indicator of a ball. To achieve relative accuracy ε, we use T = c·d²/ε² ramp sigmoids, a result comparable to that of Cheang and Barron (2000) [4], where unit step activation functions are used instead. The result is then applied to functions that have variation V_f with respect to a class of ellipsoids. Two-hidden-layer feedforward neural nets with ramp sigmoid activation functions are used to approximate such functions. The approximation error is shown to be bounded by a constant times V_f/T_1^{1/2} + V_f·d/T_2^{1/4}, where T_1 is the number of nodes in the outer layer and T_2 is the number of nodes in the inner layer of the approximation f_{T_1,T_2}.
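The construction above can be illustrated with a minimal numerical sketch: threshold an average of ramp sigmoids applied to projections of x onto directions on the sphere, so that the resulting level set approximates the indicator of the unit ball. The random choice of directions, the ramp width `delta`, the node count `T`, and the threshold `level` are illustrative assumptions, not the paper's exact construction or its c·d²/ε² parameter choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, T, r, delta = 2, 2000, 1.0, 0.05  # dimension, number of ramps, radius, ramp width (assumed values)

# Random unit directions u_i on the sphere (illustrative choice of projection directions)
U = rng.normal(size=(T, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)

def ramp(t):
    """Ramp sigmoid: 0 for t <= 0, t on [0, 1], 1 for t >= 1."""
    return np.clip(t, 0.0, 1.0)

def g(x):
    """Linear combination (here, a mean) of ramp sigmoids of the projections u_i . x."""
    return ramp((r - U @ x) / delta).mean()

def ball_indicator_hat(x, level=0.95):
    """Level set of the thresholded combination, approximating 1{||x|| <= r}."""
    return 1 if g(x) > level else 0

print(ball_indicator_hat(np.array([0.3, 0.4])))  # point inside the unit ball -> 1
print(ball_indicator_hat(np.array([1.2, 0.9])))  # point well outside -> 0
```

For a point inside the ball, every projection u_i·x is at most ‖x‖ < r, so all ramps saturate at 1 and g(x) = 1; for a point outside, a positive fraction of directions gives u_i·x > r, pulling g(x) below the threshold.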