Why-oriented end-user debugging of naive Bayes text classification

  • Authors and affiliations:
  • Todd Kulesza (Oregon State University); Simone Stumpf (City University London); Weng-Keen Wong (Oregon State University); Margaret M. Burnett (Oregon State University); Stephen Perona (Oregon State University); Andrew Ko (University of Washington); Ian Oberst (Oregon State University)

  • Venue:
  • ACM Transactions on Interactive Intelligent Systems (TiiS)
  • Year:
  • 2011

Abstract

Machine learning techniques are increasingly used in intelligent assistants, that is, software targeted at and continuously adapting to assist end users with email, shopping, and other tasks. Examples include desktop spam filters, recommender systems, and handwriting recognition. Fixing such intelligent assistants when they learn incorrect behavior, however, has received only limited attention. To directly support end-user “debugging” of assistant behaviors learned via statistical machine learning, we present a Why-oriented approach which allows users to ask questions about how the assistant made its predictions, provides answers to these “why” questions, and allows users to interactively change these answers to debug the assistant's current and future predictions. To understand the strengths and weaknesses of this approach, we conducted an exploratory study to investigate barriers that participants could encounter when debugging an intelligent assistant using our approach, and the information those participants requested to overcome these barriers. To help ensure the inclusiveness of our approach, we also explored how gender differences played a role in understanding barriers and information needs. We then used these results to consider opportunities for Why-oriented approaches to address user barriers and information needs.
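
The abstract's core mechanism, showing users why a naive Bayes classifier made a prediction and letting them interactively change those answers, can be illustrated with a small sketch. The code below is not the authors' prototype; it is a hypothetical, minimal naive Bayes text classifier (names such as DebuggableNaiveBayes, explain, and adjust_word are invented for illustration) that exposes per-word log-probability contributions as "why" answers and lets a user add a corrective weight to a word for a class, affecting the current and future predictions.

```python
# A minimal sketch (not the authors' Why-oriented prototype) of debugging a
# naive Bayes text classifier: show per-word contributions to a prediction
# ("why" answers) and let the user adjust a word's weight for a class.
# All class names, method names, and data here are hypothetical.
import math
from collections import Counter, defaultdict

class DebuggableNaiveBayes:
    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing
        self.class_counts = Counter()                # documents per class
        self.word_counts = defaultdict(Counter)      # word counts per class
        self.user_adjustments = defaultdict(float)   # (class, word) -> user-set additive log-weight
        self.vocab = set()

    def fit(self, documents, labels):
        for words, label in zip(documents, labels):
            self.class_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def _log_likelihood(self, label, word):
        # Laplace-smoothed log P(word | label), plus any user correction.
        count = self.word_counts[label][word] + self.smoothing
        total = sum(self.word_counts[label].values()) + self.smoothing * len(self.vocab)
        return math.log(count / total) + self.user_adjustments[(label, word)]

    def explain(self, words):
        """Return per-class scores and each word's contribution (the 'why' answer)."""
        total_docs = sum(self.class_counts.values())
        scores, contributions = {}, {}
        for label in self.class_counts:
            prior = math.log(self.class_counts[label] / total_docs)
            per_word = {w: self._log_likelihood(label, w) for w in words}
            scores[label] = prior + sum(per_word.values())
            contributions[label] = per_word
        return scores, contributions

    def predict(self, words):
        scores, _ = self.explain(words)
        return max(scores, key=scores.get)

    def adjust_word(self, label, word, delta):
        """User feedback: make `word` count more (delta > 0) or less toward `label`."""
        self.user_adjustments[(label, word)] += delta


# Hypothetical usage: a tiny email classifier the user can interrogate and correct.
nb = DebuggableNaiveBayes()
nb.fit([["free", "offer", "win"], ["meeting", "agenda"], ["win", "prize", "free"]],
       ["spam", "work", "spam"])
msg = ["free", "meeting"]
scores, why = nb.explain(msg)
print(nb.predict(msg), why)             # inspect which words drove the prediction
nb.adjust_word("work", "meeting", 2.0)  # user: "meeting" should count more toward work mail
print(nb.predict(msg))                  # the corrected model now predicts "work"
```

In this sketch the user's feedback is stored as an additive term in log space, so one adjustment shifts every future score involving that word and class; the interface and weighting scheme used in the paper's actual prototype may differ.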