Interacting meaningfully with machine learning systems: Three experiments

  • Authors:
  • Simone Stumpf; Vidya Rajaram; Lida Li; Weng-Keen Wong; Margaret Burnett; Thomas Dietterich; Erin Sullivan; Jonathan Herlocker

  • Affiliations:
  • Oregon State University, School of Electrical Engineering and Computer Science, Corvallis, OR 97331, USA (all authors)

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2009

Abstract

Although machine learning is becoming commonly used in today's software, there has been little research into how end users might interact with machine learning systems beyond communicating simple "right/wrong" judgments. If users could work hand-in-hand with machine learning systems, both the users' understanding and trust of the system and the accuracy of the system's learning could improve. We conducted three experiments to understand the potential for rich interactions between users and machine learning systems. The first experiment was a think-aloud study that investigated users' willingness to interact with machine learning reasoning and the kinds of feedback users might give to machine learning systems. We then investigated the viability of introducing such feedback into machine learning systems: specifically, how several of these types of user feedback can be incorporated into machine learning algorithms and what their impact is on the accuracy of the system. Taken together, the results of our experiments show that supporting rich interactions between users and machine learning systems is feasible for both the user and the machine. This suggests rich human-computer collaboration via on-the-spot interactions as a promising direction for machine learning systems and users to collaboratively share intelligence.
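
To make the idea of incorporating richer user feedback concrete, here is a minimal sketch, assuming a multinomial Naive Bayes text classifier and feature-level feedback of the form "this word is evidence for this label". The classifier, the feedback API, and the pseudo-count boost are illustrative assumptions, not the mechanism evaluated in the paper.

# Sketch: folding feature-level user feedback into a multinomial Naive Bayes
# text classifier by boosting the pseudo-count of a user-flagged word for a
# label.  The boost factor and feedback format are illustrative assumptions.
import math
from collections import Counter, defaultdict

class FeedbackNaiveBayes:
    def __init__(self, smoothing=1.0, boost=5.0):
        self.smoothing = smoothing                  # Laplace smoothing constant
        self.boost = boost                          # extra pseudo-count per flagged word
        self.word_counts = defaultdict(Counter)     # label -> word -> count
        self.label_counts = Counter()               # label -> number of training documents

    def train(self, documents):
        """documents: iterable of (list_of_words, label) pairs."""
        for words, label in documents:
            self.label_counts[label] += 1
            self.word_counts[label].update(words)

    def add_feedback(self, word, label):
        """User indicates that `word` is important evidence for `label`."""
        self.word_counts[label][word] += self.boost

    def predict(self, words):
        """Return the most probable label for a bag of words."""
        vocab = {w for counts in self.word_counts.values() for w in counts}
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + self.smoothing * len(vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + self.smoothing) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Usage: the user's feedback shifts an ambiguous message toward the flagged label.
clf = FeedbackNaiveBayes()
clf.train([(["meeting", "agenda", "report"], "work"),
           (["party", "weekend", "photos"], "personal")])
clf.add_feedback("agenda", "work")          # user highlights "agenda" as a work cue
print(clf.predict(["agenda", "photos"]))    # likely "work" after the boost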