Hybrid fuzzy set-based polynomial neural networks and their development with the aid of genetic optimization and information granulation

  • Authors:
  • Sung-Kwun Oh; Witold Pedrycz; Seok-Beom Roh

  • Affiliations:
  • Department of Electrical Engineering, The University of Suwon, San 2-2, Wau-ri, Bongdam-eup, Hwaseong-si, Gyeonggi-do, 445-743, South Korea
  • Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2G6, Canada, and Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
  • Department of Electrical Electronic and Information Engineering, Wonkwang University, 344-2, Shinyong-Dong, Iksan, Chon-Buk, 570-749, South Korea

  • Venue:
  • Applied Soft Computing
  • Year:
  • 2009

Abstract

We introduce a new architecture of feed-forward neural networks called hybrid fuzzy set-based polynomial neural networks (HFSPNNs), which are composed of heterogeneous feed-forward neural networks such as polynomial neural networks (PNNs) and fuzzy set-based polynomial neural networks (FSPNNs). We develop their comprehensive design methodology by embracing mechanisms of genetic optimization and information granulation. The construction of the information granulation-driven HFSPNN exploits fundamental technologies of computational intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms (GAs). The architecture of the resulting information granulation-driven, genetically optimized HFSPNN (gHFSPNN) arises from a synergistic combination of an FSPNN built from fuzzy set-based polynomial neurons (FSPNs) with a PNN built from polynomial neurons (PNs). The design of the conventional genetically optimized HFPNN exploits the extended Group Method of Data Handling (GMDH), with some essential parameters of the network tuned by genetic algorithms throughout the overall development process. Two general optimization mechanisms are explored: structural optimization is realized via GAs, while the ensuing detailed parametric optimization is carried out through standard least-squares learning. The performance of the gHFSPNN is quantified through extensive experimentation on a number of modeling benchmarks (synthetic and experimental data already studied in fuzzy or neuro-fuzzy modeling).
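
The abstract refers to GMDH-type networks whose node-level (parametric) learning relies on a standard least-squares method. The sketch below illustrates that local fitting step for a single second-order polynomial neuron with two inputs; the function names, the quadratic form, and the toy data are illustrative assumptions only, not the authors' implementation, which additionally involves fuzzy set-based polynomial neurons and GA-driven structural optimization.

```python
import numpy as np

def fit_polynomial_neuron(x1, x2, y):
    """Least-squares fit of a second-order polynomial neuron, the basic
    GMDH-style building block:
    y_hat = c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1^2 + c5*x2^2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def eval_polynomial_neuron(coeffs, x1, x2):
    """Evaluate the fitted neuron on new inputs."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return X @ coeffs

# Toy usage: recover a quadratic input-output relationship from noisy samples.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 0.5 + 1.2 * x1 - 0.7 * x2 + 0.3 * x1 * x2 + rng.normal(0, 0.01, 200)

c = fit_polynomial_neuron(x1, x2, y)
mse = np.mean((eval_polynomial_neuron(c, x1, x2) - y) ** 2)
print("coefficients:", np.round(c, 3), "training MSE:", mse)
```

In a GMDH-style layer, neurons of this kind are typically fitted for many candidate input pairs and only the better-performing ones are retained as inputs to the next layer; in the gHFSPNN described here, such structural choices are the part of the design delegated to the genetic algorithm.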