Where are my intelligent assistant's mistakes? A systematic testing approach

  • Authors:
  • Todd Kulesza, Margaret Burnett, Simone Stumpf, Weng-Keen Wong, Shubhomoy Das, Alex Groce, Amber Shinsel, Forrest Bice, Kevin McIntosh

  • Affiliations:
  • School of EECS, Kelley Engr. Center, Oregon State University, Corvallis, OR (all authors except Simone Stumpf); Centre for HCI Design, City University London, Northampton Square, London (Simone Stumpf)

  • Venue:
  • IS-EUD'11 Proceedings of the Third International Conference on End-User Development
  • Year:
  • 2011

Abstract

Intelligent assistants are handling increasingly critical tasks, but until now, end users have had no way to systematically assess where their assistants make mistakes. For some intelligent assistants, this is a serious problem: if the assistant is doing work that is important, such as assisting with qualitative research or monitoring an elderly parent's safety, the user may pay a high cost for unnoticed mistakes. This paper addresses the problem with WYSIWYT/ML (What You See Is What You Test for Machine Learning), a human/computer partnership that enables end users to systematically test intelligent assistants. Our empirical evaluation shows that WYSIWYT/ML helped end users find assistants' mistakes significantly more effectively than ad hoc testing. Not only did it allow users to assess an assistant's work on an average of 117 predictions in only 10 minutes, it also scaled to a much larger data set, assessing an assistant's work on 623 out of 1,448 predictions using only the users' original 10 minutes' testing effort.
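The abstract does not describe the underlying mechanism, but the scaling result (623 of 1,448 predictions assessed from 10 minutes of explicit testing) suggests that a user's explicit judgments are extended to similar, untested predictions. The sketch below is a hypothetical illustration of that general idea only, not the WYSIWYT/ML algorithm; the feature vectors, the cosine-similarity measure, and the 0.8 threshold are assumptions made for the example.

```python
# Hypothetical illustration (not the WYSIWYT/ML algorithm): extend a handful of
# explicit user judgments to similar, untested predictions, so a few minutes of
# testing "covers" a larger share of the assistant's output.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def propagate_assessments(items, judged, threshold=0.8):
    """items:  {prediction_id: feature_vector} for the assistant's predictions.
    judged: {prediction_id: "right" | "wrong"} explicit user assessments.
    Returns tentative assessments for unjudged predictions whose features are
    sufficiently similar to an explicitly judged one."""
    inferred = {}
    for item_id, vec in items.items():
        if item_id in judged:
            continue
        # Find the most similar explicitly judged prediction.
        best_id, best_sim = None, 0.0
        for j_id in judged:
            sim = cosine(vec, items[j_id])
            if sim > best_sim:
                best_id, best_sim = j_id, sim
        if best_id is not None and best_sim >= threshold:
            inferred[item_id] = judged[best_id]  # tentatively covered
    return inferred

# Usage: one explicit judgment extends coverage to a similar, untested prediction.
items = {
    "msg1": [1.0, 0.0, 0.2],
    "msg2": [0.9, 0.1, 0.3],   # similar to msg1
    "msg3": [0.0, 1.0, 0.0],   # dissimilar to every judged prediction
}
judged = {"msg1": "right"}
print(propagate_assessments(items, judged))   # {'msg2': 'right'}
```

Under this kind of scheme, coverage grows with the number and diversity of explicit judgments, which is consistent with the abstract's claim that the same 10 minutes of effort assessed far more predictions on the larger data set.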