Some new results on neural network approximation

  • Authors: K. Hornik
  • Affiliations: Technische Universität Wien, Austria
  • Venue: Neural Networks
  • Year: 1993

Abstract

We show that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta provided that the activation function ψ is locally Riemann integrable and nonpolynomial, and have universal L^p(μ) approximation capabilities for finite and compactly supported input environment measures μ provided that ψ is locally bounded and nonpolynomial. In both cases, the input-to-hidden weights and hidden-layer biases can be constrained to arbitrarily small sets; if, in addition, ψ is locally analytic, a single universal bias will do.
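To make the statement concrete, here is a minimal numerical sketch (not from the paper): a single-hidden-layer network with a nonpolynomial activation (tanh) approximating a continuous function on a compact interval. In line with the theorem, the input-to-hidden weights and biases are drawn from a small fixed interval and only the output weights are fitted, here by least squares; the target function, weight range, and hidden-layer size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on the compact set [-pi, pi].
f = np.sin
x = np.linspace(-np.pi, np.pi, 500)

# Single hidden layer, nonpolynomial activation (tanh).
# Input weights and biases are confined to the small set [-1, 1],
# as the theorem permits; they are fixed, not trained.
n_hidden = 50
W = rng.uniform(-1.0, 1.0, n_hidden)
b = rng.uniform(-1.0, 1.0, n_hidden)

# Hidden-layer activations for all grid points, shape (500, 50).
H = np.tanh(np.outer(x, W) + b)

# Fit only the hidden-to-output weights by least squares.
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# Uniform (sup-norm) error of the network on the grid.
err = np.max(np.abs(H @ c - f(x)))
print(err)
```

With even a modest number of hidden units the sup-norm error on the grid is small, illustrating (not proving) the uniform approximation property; the theorem itself guarantees that such approximants exist for any continuous target and any tolerance.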