End-user feature labeling: a locally-weighted regression approach. Proceedings of the 16th International Conference on Intelligent User Interfaces.
Where are my intelligent assistant's mistakes? A systematic testing approach. Proceedings of the Third International Conference on End-User Development (IS-EUD '11).
An explanation-centric approach for personalizing intelligent agents. Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces.
Tell me more? The effects of mental model soundness on personalizing an intelligent agent. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
End-user interactions with intelligent and autonomous systems. CHI '12 Extended Abstracts on Human Factors in Computing Systems.
The effect of explanations on perceived control and behaviors in intelligent systems. CHI '13 Extended Abstracts on Human Factors in Computing Systems.
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on “explanatory debugging”, then empirically evaluated it. Our results contribute methods for exposing a learned program’s logic to end users and for eliciting user corrections to improve the program’s predictions.
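The abstract's core idea, "explanatory debugging", pairs two mechanisms: exposing a learned program's logic to the end user, and folding the user's corrections back into the model. A minimal sketch of that loop on a toy text classifier is shown below; the class name, the naive Bayes formulation, the example data, and the pseudo-count correction scheme are all illustrative assumptions, not the paper's actual system.

```python
# Hypothetical sketch of an "explanatory debugging" loop on a toy
# text classifier. All names and data are illustrative.
from collections import Counter
import math

class ExplainableNB:
    """Toy multinomial naive Bayes whose per-word evidence can be
    shown to the user and adjusted by their corrections."""

    def __init__(self, labeled_docs, smoothing=1.0):
        self.smoothing = smoothing
        self.counts = {}   # label -> Counter of word counts
        self.vocab = set()
        for text, label in labeled_docs:
            words = text.lower().split()
            self.counts.setdefault(label, Counter()).update(words)
            self.vocab.update(words)

    def _word_logprob(self, word, label):
        c = self.counts[label]
        return math.log((c[word] + self.smoothing) /
                        (sum(c.values()) + self.smoothing * len(self.vocab)))

    def explain(self, text):
        """Expose the learned logic: each word's contribution to each
        class, so the user can see *why* a prediction was made."""
        words = text.lower().split()
        return {label: {w: self._word_logprob(w, label) for w in words}
                for label in self.counts}

    def predict(self, text):
        expl = self.explain(text)
        return max(expl, key=lambda label: sum(expl[label].values()))

    def correct(self, word, label, boost=5):
        """Elicit a user correction ("this word should count more
        toward this label") as extra pseudo-counts for that word."""
        self.counts[label][word] += boost
        self.vocab.add(word)

docs = [("meeting agenda budget", "work"),
        ("party weekend friends", "personal")]
clf = ExplainableNB(docs)
print(clf.predict("budget meeting"))   # -> work
clf.correct("weekend", "personal")     # user strengthens a word's weight
```

In this sketch the "explanation" is simply the additive per-word evidence of the linear model, which is what makes a user's correction easy to interpret: boosting a word's pseudo-counts directly shifts the contribution the explanation displayed.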