Explaining the output of ensembles in medical decision support on a case by case basis

  • Authors:
  • Robert Wall; Pádraig Cunningham; Paul Walsh; Stephen Byrne

  • Affiliation:
  • Machine Learning Group, Computer Science Department, Trinity College, Dublin, Ireland (all authors)

  • Venue:
  • Artificial Intelligence in Medicine
  • Year:
  • 2003

Abstract

The use of ensembles in machine learning (ML) has had a considerable impact in increasing the accuracy and stability of predictors. This increase in accuracy has come at the cost of comprehensibility as, by definition, an ensemble model is considerably more complex than its component models. This is of significance for decision support systems in medicine because of the reluctance to use models that are essentially black boxes. Work on making ensembles comprehensible has so far focused on global models that mirror the behaviour of the ensemble as closely as possible. With such global models there is a clear trade-off between comprehensibility and fidelity. In this paper, we pursue another tack, looking at local comprehensibility where the output of the ensemble is explained on a case-by-case basis. We argue that this meets the requirements of medical decision support systems. The approach presented here identifies the ensemble members that best fit the case in question and presents the behaviour of those members as the explanation.
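The abstract describes the approach only at a high level. The sketch below is a hypothetical illustration of the general idea, not the authors' implementation: it assumes a scikit-learn random forest as the ensemble and uses each member's confidence in the ensemble's predicted class as a stand-in for the paper's member-selection criterion, then reports the selected members' decision paths for the query case as the local explanation.

```python
# Minimal sketch of case-by-case (local) ensemble explanation.
# Assumptions: a scikit-learn RandomForestClassifier as the ensemble; "best fit"
# approximated by each member's predicted-class probability for the query case.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_case(forest, x, n_members=3):
    """Explain the ensemble's prediction for one case via its best-fitting members."""
    x = x.reshape(1, -1)
    ensemble_class = forest.predict(x)[0]

    # Score each member by its confidence in the ensemble's predicted class.
    scores = [tree.predict_proba(x)[0, ensemble_class] for tree in forest.estimators_]
    best = np.argsort(scores)[::-1][:n_members]

    explanation = []
    for idx in best:
        tree = forest.estimators_[idx].tree_
        # Follow the query case's decision path through this member,
        # recording the test at each internal node until a leaf is reached.
        node, conditions = 0, []
        while tree.children_left[node] != tree.children_right[node]:
            f, t = tree.feature[node], tree.threshold[node]
            if x[0, f] <= t:
                conditions.append(f"{feature_names[f]} <= {t:.2f}")
                node = tree.children_left[node]
            else:
                conditions.append(f"{feature_names[f]} > {t:.2f}")
                node = tree.children_right[node]
        explanation.append((idx, scores[idx], conditions))
    return ensemble_class, explanation

pred, expl = explain_case(forest, X[0])
print(f"Ensemble prediction: class {pred}")
for idx, conf, conds in expl:
    print(f"  member {idx} (confidence {conf:.2f}): " + " AND ".join(conds))
```

In this sketch the explanation for a single patient record is a handful of readable rules drawn from the members that most strongly support the ensemble's decision, rather than a single global surrogate model.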