Exploring new possibilities for case-based explanation of artificial neural network ensembles

  • Authors:
  • Michael Green; Ulf Ekelund; Lars Edenbrandt; Jonas Björk; Jakob Lundager Forberg; Mattias Ohlsson

  • Affiliations:
  • Computational Biology and Biological Physics Group, Department of Theoretical Physics, Lund University, Sölvegatan 14A, SE-223 62 Lund, Sweden; Department of Emergency Medicine, Lund University Hospital, SE-221 85 Lund, Sweden; Department of Clinical Physiology, Malmö University Hospital, SE-205 02 Malmö, Sweden; Competence Centre for Clinical Research, Lund University Hospital, SE-221 85 Lund, Sweden; Department of Emergency Medicine, Lund University Hospital, SE-221 85 Lund, Sweden; Computational Biology and Biological Physics Group, Department of Theoretical Physics, Lund University, Sölvegatan 14A, SE-223 62 Lund, Sweden

  • Venue:
  • Neural Networks
  • Year:
  • 2009

Abstract

Artificial neural network (ANN) ensembles have long suffered from a lack of interpretability. This has severely limited the practical usability of ANNs in settings where an erroneous decision can be disastrous. Several attempts have been made to alleviate this problem, many of them based on decomposing the decision boundary of the ANN into a set of rules. We explore and compare a set of new methods for this explanation process on two artificial data sets (Monks 1 and 3) and one acute coronary syndrome data set consisting of 861 electrocardiograms (ECGs) collected retrospectively at the emergency department of Lund University Hospital. The algorithms extracted good explanations in more than 84% of the cases; the best method provided good explanations in 99% and 91% of the cases for Monks data 1 and 3, respectively. There was also significant overlap between the algorithms. Furthermore, when explaining a given ECG, the overlap between this method and one of the physicians was the same as that between the two physicians in the study. Still, the physicians were significantly, p-value
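For illustration only, and not the specific methods evaluated in the paper, the sketch below shows one common form of case-based explanation for an ANN ensemble: average the outputs of several small networks and justify a prediction for a query case by retrieving the most similar training cases and reporting how the ensemble scores them. The synthetic data, ensemble size, and nearest-neighbour retrieval are assumptions made for this example rather than details taken from the study.

    # Illustrative sketch (assumed setup, not the paper's algorithms): an ANN
    # ensemble whose prediction is explained by similar training cases.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import NearestNeighbors

    # Synthetic stand-in for a Monks-style classification problem.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)

    # Ensemble of small MLPs, each trained on a bootstrap resample of the data.
    rng = np.random.default_rng(0)
    ensemble = []
    for seed in range(10):
        idx = rng.integers(0, len(X), size=len(X))
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed)
        ensemble.append(net.fit(X[idx], y[idx]))

    def ensemble_proba(x):
        """Average class-1 probability over the ensemble members."""
        return np.mean([net.predict_proba(x.reshape(1, -1))[0, 1] for net in ensemble])

    # Case-based explanation: retrieve the training cases most similar to the
    # query and show their labels and ensemble outputs alongside the prediction.
    query = X[0]
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    _, neighbour_idx = nn.kneighbors(query.reshape(1, -1))
    print("ensemble output for query:", ensemble_proba(query))
    for i in neighbour_idx[0]:
        print(f"similar case {i}: label={y[i]}, ensemble output={ensemble_proba(X[i]):.2f}")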