Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs

  • Authors:
  • Todd Kulesza, Simone Stumpf, Margaret Burnett, Weng-Keen Wong, Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, Kevin McIntosh

  • Venue:
  • VLHCC '10 Proceedings of the 2010 IEEE Symposium on Visual Languages and Human-Centric Computing
  • Year:
  • 2010

Abstract

Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on “explanatory debugging”, then empirically evaluated it. Our results contribute methods for exposing a learned program’s logic to end users and for eliciting user corrections to improve the program’s predictions.
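The abstract's two contributions, exposing a learned program's logic and eliciting user corrections, can be illustrated with a toy bag-of-words text classifier. The sketch below is not the paper's actual system; the class, method names, and the additive weight-update rule are all hypothetical, chosen only to show the shape of an "explanatory debugging" interaction: a prediction is explained as a list of per-word weights, and an end-user correction directly adjusts one of those weights.

```python
# Illustrative sketch only (hypothetical API, not the paper's prototype):
# a bag-of-words classifier that can explain a prediction via per-word
# weights and accept end-user corrections to those weights.
from collections import defaultdict

class ExplainableClassifier:
    def __init__(self):
        # weights[word][label]: evidence the word contributes toward label
        self.weights = defaultdict(lambda: defaultdict(float))
        self.labels = set()

    def train(self, text, label):
        self.labels.add(label)
        for word in text.lower().split():
            self.weights[word][label] += 1.0

    def predict(self, text):
        scores = {label: 0.0 for label in self.labels}
        for word in text.lower().split():
            for label, w in self.weights[word].items():
                scores[label] += w
        return max(scores, key=scores.get)

    def explain(self, text):
        # Expose the learned "program logic": each known word's weights.
        return {word: dict(self.weights[word])
                for word in text.lower().split() if word in self.weights}

    def correct(self, word, label, amount=1.0):
        # End-user debugging step: strengthen a word's link to a label.
        self.labels.add(label)
        self.weights[word][label] += amount

clf = ExplainableClassifier()
clf.train("meeting agenda budget", "work")
clf.train("dinner movie weekend", "personal")
print(clf.predict("budget meeting"))   # -> work
print(clf.explain("budget meeting"))   # per-word weight breakdown
clf.correct("weekend", "personal", 2.0)  # user feedback updates the model
```

In a real explanatory-debugging interface the explanation and correction steps would be interactive widgets rather than method calls, but the round trip is the same: show the logic behind a prediction, then fold the user's fix back into the model.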