Reorganizing knowledge in neural networks: an explanatory mechanism for neural networks in data classification problems

  • Authors:
  • H. Narazaki; T. Watanabe; M. Yamamoto

  • Affiliations:
  • Process Technol. Res. Lab., Kobe Steel Ltd.

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 1996

Abstract

We propose an explanatory mechanism for multilayered neural networks (NN). Despite its effective learning capability as a universal function approximator, the multilayered NN suffers from unreadability, i.e., it is difficult for the user to interpret or understand the “knowledge” the NN holds by inspecting the connection weights and thresholds obtained by backpropagation (BP). This unreadability stems from the distributed nature of the knowledge representation in the NN. In this paper, we propose a method that reorganizes the distributed knowledge in the NN to extract approximate classification rules. Our rule extraction method is based on an analysis of the function the NN has learned, rather than on a direct interpretation of the connection weights as correlation information. More specifically, the method divides the input space into “monotonic regions,” where a monotonic region is a set of input patterns that belong to the same class with the same sensitivity pattern. Approximate classification rules are generated by projecting these monotonic regions
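
The sketch below is an illustrative reconstruction of the idea described in the abstract, not the authors' actual algorithm: the functions `forward`, `sensitivity_pattern`, and `extract_interval_rules`, the finite-difference gradient estimate, and the axis-aligned projection of each region are all assumptions made for illustration. It groups input samples by (predicted class, sign pattern of the output's sensitivity to each input) and projects each group onto the input axes to form an approximate interval rule.

```python
import numpy as np

# Minimal sketch (hypothetical, not the paper's code): a small trained MLP is
# represented by its forward function; inputs are grouped by (predicted class,
# sensitivity pattern), where the sensitivity pattern is the sign of the
# numerical gradient of the winning output with respect to each input.
# Each group stands in for one "monotonic region"; projecting it onto the
# input axes yields an approximate interval rule.

def forward(net, x):
    """Two-layer MLP with sigmoid hidden units and linear outputs."""
    W1, b1, W2, b2 = net
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

def sensitivity_pattern(net, x, eps=1e-4):
    """Sign of d(winning output)/d(x_i), estimated by finite differences."""
    y = forward(net, x)
    c = int(np.argmax(y))
    signs = []
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        d = forward(net, xp)[c] - y[c]
        signs.append(int(np.sign(np.round(d / eps, 6))))
    return c, tuple(signs)

def extract_interval_rules(net, X):
    """Group samples into monotonic regions and project each onto the axes."""
    regions = {}
    for x in X:
        key = sensitivity_pattern(net, x)          # (class, sign pattern)
        regions.setdefault(key, []).append(x)
    rules = []
    for (c, signs), pts in regions.items():
        pts = np.array(pts)
        lo, hi = pts.min(axis=0), pts.max(axis=0)  # axis-aligned projection
        rules.append({"class": c, "sensitivity": signs,
                      "bounds": list(zip(lo.tolist(), hi.tolist()))})
    return rules

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical, untrained weights just to make the sketch executable.
    net = (rng.normal(size=(5, 2)), rng.normal(size=5),
           rng.normal(size=(2, 5)), rng.normal(size=2))
    X = rng.uniform(-1, 1, size=(200, 2))
    for rule in extract_interval_rules(net, X)[:3]:
        print(rule)
```

In this reading, each extracted rule is an "if each input lies in its interval, then class c" statement, which matches the abstract's goal of turning the network's distributed weights into human-readable approximate classification rules.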