Fixing the program my computer learned: barriers for end users, challenges for the machine

  • Authors:
  • Todd Kulesza; Weng-Keen Wong; Simone Stumpf; Stephen Perona; Rachel White; Margaret M. Burnett; Ian Oberst; Andrew J. Ko

  • Affiliations:
  • Oregon State University, Corvallis, OR, USA (Kulesza, Wong, Stumpf, Perona, White, Burnett, Oberst); University of Washington, Seattle, WA, USA (Ko)

  • Venue:
  • Proceedings of the 14th international conference on Intelligent user interfaces
  • Year:
  • 2009


Abstract

The result of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters: without the ability to fix errors, users may find the learned program's mistakes too damaging to trust it. We present a new approach to enable end users to debug a learned program. We then use an early prototype of our approach to conduct a formative study identifying where and when debugging issues arise, both in general and separately for males and females. The results suggest opportunities to make machine-learned programs more effective tools.