Interpretable Piecewise Linear Classifier

  • Authors: Pitoyo Hartono
  • Affiliations: Department of Media Architecture, Future University-Hakodate, Hakodate, Japan
  • Venue: Neural Information Processing
  • Year: 2008


Abstract

The objective of this study is to build a neural network classifier that is not only reliable but also, in contrast to most currently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction from trained neural networks focus on extracting rules from existing models that were designed without rule extraction in mind; after training, such networks are meant to be used as a kind of black box. Consequently, rule extraction becomes a hard task. In this study we construct a neural network ensemble designed with rule extraction in mind, whose function can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks contributes to improving the reliability and usability of neural networks when they are applied to critical real-world problems.
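The abstract does not give the details of the paper's ensemble, but the general idea behind a piecewise linear classifier can be sketched as follows. This is a minimal illustration, not the author's model: each class is scored by a linear function of the input, the predicted class is the argmax, and because every decision boundary is linear, the region for each class can be read off as a human-readable conjunction of linear inequalities. All weights, biases, and feature names below are hypothetical.

```python
import numpy as np

class PiecewiseLinearClassifier:
    """Illustrative piecewise linear classifier: one linear scoring
    function per class, with rules extractable as linear inequalities."""

    def __init__(self, weights, biases):
        # weights: (n_classes, n_features); biases: (n_classes,)
        self.W = np.asarray(weights, dtype=float)
        self.b = np.asarray(biases, dtype=float)

    def predict(self, X):
        # scores[i, c] = W[c] . X[i] + b[c]; predict the highest-scoring class
        return np.argmax(X @ self.W.T + self.b, axis=1)

    def rule_for_class(self, c, feature_names):
        # Class c wins exactly where its score beats every other class:
        # (W[c] - W[k]) . x + (b[c] - b[k]) >= 0 for all k != c.
        clauses = []
        for k in range(len(self.b)):
            if k == c:
                continue
            dw, db = self.W[c] - self.W[k], self.b[c] - self.b[k]
            terms = " + ".join(f"{w:+.2f}*{n}" for w, n in zip(dw, feature_names))
            clauses.append(f"({terms} {db:+.2f} >= 0)")
        return " AND ".join(clauses)

# Hypothetical two-class example in 2-D: class 1 wins where x0 + x1 > 1.
clf = PiecewiseLinearClassifier([[0.0, 0.0], [1.0, 1.0]], [1.0, 0.0])
preds = clf.predict(np.array([[0.0, 0.0], [1.0, 1.0]]))
print(preds)                              # [0 1]
print(clf.rule_for_class(1, ["x0", "x1"]))
```

The extracted rule for class 1 here reads as a single linear inequality on `x0` and `x1`, which is the sense in which a piecewise linear model is "logically interpretable": every decision region is a conjunction of such inequalities rather than an opaque nonlinear function.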