Taking advice from intelligent systems: the double-edged sword of explanations

  • Authors:
  • Kate Ehrlich, Susanna E. Kirk, John Patterson, Jamie C. Rasmussen, Steven I. Ross, Daniel M. Gruen

  • Affiliations:
  • IBM, Cambridge, MA, USA (all authors)

  • Venue:
  • Proceedings of the 16th International Conference on Intelligent User Interfaces
  • Year:
  • 2011

Abstract

Research on intelligent systems has emphasized the benefits of providing explanations along with recommendations. But can explanations lead users to make incorrect decisions? We explored this question in a controlled experimental study with 18 professional network security analysts performing an incident classification task using a prototype cybersecurity system. The system provided three recommendations on each trial, displayed either with explanations (called "justifications") or without. On half of the trials, one of the recommendations was correct; on the other half, none of the recommendations was correct. Users were more accurate when a correct recommendation was available. Although explanations produced no overall benefit, we found that a segment of the analysts was more accurate with explanations when a correct choice was available but less accurate with explanations when no correct choice was present. We discuss the implications of these results for the design of intelligent systems.