Approximation capability of interpolation neural networks

  • Authors:
  • Feilong Cao; Shaobo Lin; Zongben Xu

  • Affiliations:
  • Department of Mathematics, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China; Institute for Information and System Sciences, Xi'an Jiaotong University, Xi'an 710049, Shaanxi Province, PR China; Institute for Information and System Sciences, Xi'an Jiaotong University, Xi'an 710049, Shaanxi Province, PR China

  • Venue:
  • Neurocomputing
  • Year:
  • 2010


Abstract

It is well known that single hidden layer feed-forward neural networks (SLFNs) with at most n hidden neurons can learn n distinct samples with zero error, even when the weights connecting the input neurons to the hidden neurons and the hidden node thresholds are chosen randomly. That is, for n distinct samples there exist SLFNs with n hidden neurons that interpolate them; such networks are called exact interpolation networks for the samples. However, for some target functions to be approximated (such as continuous or integrable functions), not every exact interpolation network approximates the target function well. This paper, using a functional approach, rigorously proves that for given distinct samples there exists an SLFN that not only exactly interpolates the samples but also nearly best approximates the target function.
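The interpolation property described in the abstract can be illustrated numerically. The following is a minimal sketch (not the paper's construction) of the standard random-weight argument: input weights and thresholds are drawn at random, and the output weights are obtained by solving an n-by-n linear system, which for distinct samples is generically invertible. All names and parameter choices here are illustrative.

```python
import numpy as np

# Sketch of the interpolation property: an SLFN with n hidden neurons,
# whose input weights and hidden thresholds are chosen at random, can
# fit n distinct samples exactly by solving a linear system for the
# output weights. (Illustrative only; not the paper's near-best
# approximant construction.)
rng = np.random.default_rng(0)

n = 10                                  # number of distinct samples
X = np.sort(rng.uniform(-1.0, 1.0, n))  # 1-D inputs, distinct
y = np.sin(3.0 * X) + 0.5 * X           # arbitrary target values

# Random input-to-hidden weights and hidden node thresholds.
W = rng.normal(scale=2.0, size=n)
b = rng.normal(size=n)

# Hidden-layer output matrix H[i, j] = tanh(W[j] * X[i] + b[j]).
H = np.tanh(np.outer(X, W) + b)

# Output weights solving H @ beta = y; for distinct samples and random
# weights, H is invertible with probability one.
beta = np.linalg.solve(H, y)

# The network interpolates: its outputs match the samples essentially
# to machine precision.
y_hat = H @ beta
print(np.max(np.abs(y_hat - y)))  # close to 0
```

The caveat raised in the abstract is visible here: the construction pins down the network only at the n sample points, so between samples its behavior depends on the random draw, and nothing in this sketch guarantees a good approximation of the underlying function away from the data.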