Granular neural networks and their development through context-based clustering and adjustable dimensionality of receptive fields

  • Authors:
  • Ho-Sung Park (Industry Administration Institute, The University of Suwon, Gyeonggi-do, Korea)
  • Witold Pedrycz (Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland)
  • Sung-Kwun Oh (Department of Electrical Engineering, The University of Suwon, Gyeonggi-do, Korea)

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2009


Abstract

In this study, we present a new architecture of a granular neural network, provide a comprehensive design methodology, and elaborate on an algorithmic setup supporting its development. The proposed neural network belongs to the broad category of radial basis function neural networks (RBFNNs) in the sense that its topology involves a collection of receptive fields. In contrast to standard RBFNN architectures, here we form the individual receptive fields in subspaces of the original input space rather than in the entire input space; these subspaces may differ from one receptive field to another. The architecture of the network fully reflects the structure encountered in the training data, which are granulated with the aid of clustering techniques. More specifically, the output space is granulated using K-means clustering, while the information granules in the multidimensional input space are formed by the so-called context-based fuzzy C-means (CFCM), which takes into account the structure already formed in the output space. An innovative facet of the development is a dynamic reduction of the dimensionality of the input space: the information granules are formed in a subspace of the overall input space, obtained by selecting a suitable subset of input variables so that this subspace retains the structure of the entire space. As this search is combinatorial in character, we use genetic optimization (genetic algorithms, GAs) to determine the optimal input subspaces. A series of numeric studies exploiting synthetic data and data from the Machine Learning Repository of the University of California at Irvine provides detailed insight into the nature of the algorithm and its parameters, and offers some comparative analysis.
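The central step of the abstract, context-based fuzzy C-means, differs from standard FCM in that the memberships of each data point sum to a context value f_k induced by the output-space granulation rather than to 1. The sketch below illustrates this idea only; it is not the authors' implementation, and the function name `context_fcm` and its parameters (fuzzification coefficient `m`, iteration count) are illustrative assumptions.

```python
import numpy as np

def context_fcm(X, contexts, n_clusters, m=2.0, n_iter=50, seed=0):
    """Minimal context-based fuzzy C-means sketch (illustrative, not the paper's code).

    X        : (N, d) input data
    contexts : (N,) context value f_k of each sample, taken from the
               output-space granulation (f_k = 1 recovers standard FCM)
    Returns the membership matrix U (N, n_clusters) and prototypes V.
    """
    rng = np.random.default_rng(seed)
    N, _ = X.shape
    V = X[rng.choice(N, n_clusters, replace=False)]  # initialize prototypes on data points
    for _ in range(n_iter):
        # distances from every point to every prototype; small floor avoids division by zero
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        # CFCM membership: u_ik = f_k / sum_j (d_ik / d_jk)^(2/(m-1)),
        # so each point's memberships sum to its context value f_k, not to 1
        ratio = (D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0))
        U = contexts[:, None] / ratio.sum(axis=2)
        # prototype update is the same weighted mean as in standard FCM
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V
```

With `contexts` set to all ones the routine reduces to plain fuzzy C-means, which is a convenient sanity check; in the network described above, the contexts would instead come from the K-means granulation of the output space.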