Understanding neural networks via rule extraction

  • Authors:
  • Rudy Setiono; Huan Liu

  • Affiliations:
  • Department of Information Systems and Computer Science, National University of Singapore, Singapore (both authors)

  • Venue:
  • IJCAI'95: Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1995

Abstract

Although backpropagation neural networks generally predict better than decision trees on pattern classification problems, they are often regarded as black boxes: their predictions are not as interpretable as those of decision trees. This paper argues that this perception persists because no suitable technique has been available for interpreting them. Using an algorithm that extracts rules from a trained network, and drawing parallels with decision trees, we show that a network's predictions can be explained via the rules extracted from it, and that the network can thereby be understood. Experiments demonstrate that the rules extracted from neural networks are comparable to those of decision trees in terms of predictive accuracy, number of rules, and average number of conditions per rule, while preserving the high predictive accuracy of the original networks.
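The abstract does not reproduce the extraction algorithm itself. As a loose illustration of the general idea only (explaining a trained network with if-then rules and measuring how faithfully the rules track the network), the sketch below fits a shallow surrogate decision tree to a network's own predictions using scikit-learn. This is an assumed stand-in technique for illustration, not the authors' rule-extraction method; the dataset, model sizes, and `fidelity` metric shown are choices made here, not taken from the paper.

```python
# Illustrative sketch only: NOT the paper's algorithm. It shows the generic
# idea of turning a trained network's behaviour into readable if-then rules
# by fitting a surrogate decision tree to the network's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# 1. Train a feedforward (backpropagation) network on the task.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# 2. Fit a shallow tree to the network's predicted labels; each root-to-leaf
#    path is a rule, and tree depth bounds the number of conditions per rule.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, net.predict(X_train))

# 3. Report network accuracy, rule fidelity (agreement with the network),
#    and the extracted rules in text form.
print("network accuracy:", net.score(X_test, y_test))
print("rule fidelity   :", surrogate.score(X_test, net.predict(X_test)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate tree of depth 3 yields at most 8 rules with at most 3 conditions each, which is the kind of rule-count and conditions-per-rule comparison the abstract refers to when weighing extracted rules against decision trees.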