Function approximation with spiked random networks

  • Authors:
  • E. Gelenbe;Zhi-Hong Mao;Yan-Da Li

  • Affiliations:
  • Sch. of Comput. Sci., Central Florida Univ., Orlando, FL

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1999


Abstract

Examines the function approximation properties of the "random neural network model," or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward bipolar GNN (BGNN) model, which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0,1]^s) and any ε > 0, we show that there exists a feedforward BGNN that approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is also a universal function approximator.
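The abstract's central claim is uniform approximation: for any continuous f on [0,1]^s and any ε > 0, some network gets within ε of f everywhere at once (sup-norm error, not average error). The sketch below is not the paper's BGNN; it is a minimal, hypothetical illustration of the same kind of statement using a one-hidden-layer threshold network on [0,1], where output weights of both signs stand in loosely for the "positive and negative neurons" of the bipolar model. All function names are illustrative.

```python
# Illustrative sketch only (not the BGNN of the paper): uniform
# approximation of a continuous f on [0,1] by a one-hidden-layer
# network of threshold units. Step increments f(k/n) - f((k-1)/n)
# serve as output weights and may be positive or negative.

def heaviside(z):
    """Threshold activation: the unit fires iff its input is >= 0."""
    return 1.0 if z >= 0.0 else 0.0

def build_threshold_net(f, n):
    """One hidden unit with threshold k/n for k = 1..n; the network
    reproduces the staircase x -> f(floor(n*x)/n) on [0, 1]."""
    weights = [(k / n, f(k / n) - f((k - 1) / n)) for k in range(1, n + 1)]
    bias = f(0.0)

    def net(x):
        return bias + sum(w * heaviside(x - t) for t, w in weights)

    return net

f = lambda x: x * x                  # a target function in C([0,1])
net = build_threshold_net(f, 100)

# Empirical sup-norm error on a fine grid. For this f (Lipschitz
# constant 2 on [0,1]) the staircase error is bounded by 2/n = 0.02.
sup_err = max(abs(f(m / 1000) - net(m / 1000)) for m in range(1001))
print(sup_err)
```

Shrinking ε simply means raising n: the sup-norm error of the staircase is controlled by the modulus of continuity of f at scale 1/n, which mirrors (in a much simpler setting) how the paper's constructive proofs trade network size for approximation accuracy.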