Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

  • Authors:
  • Guang-Bin Huang; H. A. Babri

  • Affiliations:
  • Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ.

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1998

Abstract

It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of choosing the weights arbitrarily, however, is not feasible for every SLFN. The proof of our result is constructive and thus yields a method to directly find the weights of standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.