On the universal approximation theorem of fuzzy neural networks with random membership function parameters

  • Authors:
  • Lipo Wang; Bing Liu; Chunru Wan

  • Affiliations:
  • School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (all authors)

  • Venue:
  • ISNN'05: Proceedings of the Second International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2005

Abstract

Lowe [1] proposed that the kernel parameters of a radial basis function (RBF) neural network may first be fixed, after which the weights of the output layer can be determined by the pseudo-inverse. Jang, Sun, and Mizutani (p. 342 [2]) pointed out that this type of two-step training method can also be applied to fuzzy neural networks (FNNs). Through extensive computer simulations, we [3] demonstrated that an FNN with randomly fixed membership function parameters (FNN-RM) trains faster and generalizes better than the classical FNN. To provide a theoretical basis for the FNN-RM, this paper presents an intuitive proof of the universal approximation ability of the FNN-RM, based on the orthogonal set theory proposed by Kaminski and Strumillo for RBF neural networks [4].
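The two-step scheme the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes Gaussian membership functions, uses unnormalized rule firing strengths for simplicity, and the target function, rule count, and parameter ranges are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate f(x) = sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

n_rules = 30  # number of fuzzy rules (hidden nodes); chosen arbitrarily

# Step 1: randomly fix the Gaussian membership function parameters
# (centers and widths), which then stay untrained.
centers = rng.uniform(X.min(), X.max(), size=(n_rules, X.shape[1]))
widths = rng.uniform(0.3, 1.0, size=n_rules)

def firing_strengths(X):
    """Firing strength of each rule: product of Gaussian memberships
    over the input dimensions (a single Gaussian per rule here)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * widths ** 2))

# Step 2: solve for the output-layer weights by the pseudo-inverse.
H = firing_strengths(X)          # hidden-layer output matrix
w = np.linalg.pinv(H) @ y        # least-squares output weights

print("training RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```

Because only the linear output weights are fitted, training reduces to one least-squares solve, which is the source of the speedup over gradient-based tuning of all FNN parameters reported in [3].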