Improving rule extraction from neural networks by modifying hidden layer representations

  • Authors:
  • Thuan Q. Huynh; James A. Reggia

  • Affiliations:
  • Department of Computer Science, University of Maryland, College Park, Maryland (both authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

This paper describes a new method for extracting symbolic rules from multilayer feedforward neural networks. Our approach is to encourage backpropagation to learn a sparser representation at the hidden layer and to use this improved representation to extract fewer, easier-to-understand rules. A new error term defined over the hidden layer is added to the standard sum-of-squared-errors cost so that the total squared distance between hidden activation vectors is increased. We show that this method leads to fewer extracted rules without decreasing classification accuracy on four publicly available data sets.
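
To illustrate the general idea described in the abstract, the sketch below shows one way a hidden-layer separation term could be combined with the standard sum-of-squared-errors objective. This is a minimal illustration, not the paper's exact formulation: the network class `MLP`, the function `total_loss`, the weighting factor `lam`, and the specific pairwise-distance penalty are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Hypothetical two-layer feedforward network (names are illustrative)."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))   # hidden activation vectors
        y = torch.sigmoid(self.out(h))
        return y, h

def total_loss(y_pred, y_true, h, lam=0.1):
    """Sum-of-squared errors minus a hidden-layer separation bonus.

    The second term rewards a large total squared distance between
    hidden activation vectors within the batch, pushing hidden
    representations apart. The exact error term used in the paper
    may differ; this formulation is an assumption for illustration.
    """
    sse = ((y_pred - y_true) ** 2).sum()
    sq_dists = torch.cdist(h, h, p=2) ** 2   # pairwise squared distances
    separation = sq_dists.sum() / 2          # count each pair once
    return sse - lam * separation
```

In a training step, one would forward a batch, compute `total_loss(y_pred, y_batch, h)`, and backpropagate as usual; minimizing the combined objective drives the classification error down while pushing the hidden activation vectors of different examples apart, which is the property the rule-extraction step is said to benefit from.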