Granular computing in neural networks

  • Authors:
  • Scott Dick; Abraham Kandel

  • Affiliation:
  • University of South Florida, Department of Computer Science and Engineering, Tampa, FL (both authors)

  • Venue:
  • Granular computing
  • Year:
  • 2001

Abstract

The basic premise of granular computing is that, by reducing the precision of our model of a system, we can suppress minor details and focus on the most significant relationships in the system. In this chapter, we test this premise by defining a granular neural network and evaluating it on the Iris data set. Our hypothesis is that the granular neural network will be able to learn the Iris data set, but not as accurately as a standard neural network. Our network is a novel neuro-fuzzy architecture called the linguistic neural network. Its defining characteristic is that all connection weights are linguistic variables, whose values are updated by adding linguistic hedges. We define two new hedges, whose semantics require a generalization of the standard definition of linguistic variables. These generalized linguistic variables lead naturally to a linguistic arithmetic, which we prove forms a vector space. The node functions of the linguistic neural network are defined in terms of this linguistic arithmetic. The learning method for the network is a modified backpropagation algorithm, with the original arithmetic operations replaced by their linguistic equivalents. In a simulation experiment, this granulated version of the multilayer perceptron achieved 90% accuracy on the Iris data set using a coarse granulation. This result supports our hypothesis.
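The idea of a connection weight that is a linguistic variable, updated by stacking hedges rather than by numeric adjustment, can be sketched as follows. This is an illustrative toy only, not the authors' architecture: it uses Zadeh's classic "very" (concentration) and "more-or-less" (dilation) hedges in place of the two new hedges the chapter defines, and a simple triangular membership function for the base term.

```python
import math

def triangular(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Zadeh's standard hedges, used here as stand-ins for the chapter's new hedges.
def very(mu):          # concentration: sharpens the term
    return mu ** 2

def more_or_less(mu):  # dilation: broadens the term
    return math.sqrt(mu)

class LinguisticWeight:
    """A toy 'linguistic weight': a base fuzzy term plus a stack of hedges.

    A learning step appends a hedge to the stack instead of nudging a
    numeric weight, mimicking the update style described in the abstract.
    """
    def __init__(self, a, b, c):
        self.params = (a, b, c)
        self.hedges = []

    def membership(self, x):
        mu = triangular(x, *self.params)
        for h in self.hedges:
            mu = h(mu)
        return mu

w = LinguisticWeight(0.0, 0.5, 1.0)   # base term, e.g. "medium"
print(w.membership(0.25))             # 0.5
w.hedges.append(very)                 # one "update": the weight becomes "very medium"
print(w.membership(0.25))             # 0.25
```

A node function in this style would then combine such hedge-modified memberships using the paper's linguistic arithmetic rather than ordinary multiplication and addition.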