The potential for machine learning systems to improve through a mutually beneficial exchange of information with users has yet to be explored in much detail. Previously, we found that users were willing to provide a generous amount of rich feedback to machine learning systems, and that some types of this rich feedback seem promising for assimilation by machine learning algorithms. Following up on those findings, we ran an experiment to assess the viability of incorporating real-time, keyword-based feedback during the initial training phase, when labeled data is limited. We found that rich feedback improved accuracy, but an initial unstable period often caused large fluctuations in classifier behavior. Participants relied heavily on the system's communication of its state in order to respond to these changes with their feedback. The results show that, in order to benefit from the user's knowledge, machine learning systems must be able to absorb keyword-based rich feedback gracefully and provide clear explanations of their predictions.
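One way keyword-based rich feedback can be assimilated by a learning algorithm is as pseudo-counts in a multinomial naive Bayes text classifier: a user's statement that a word indicates a class is folded in as extra evidence for that word under that class. The following is a minimal sketch of this idea, not the authors' implementation; the class name, the `boost` parameter, and the pseudo-count mechanism are illustrative assumptions.

```python
from collections import Counter, defaultdict
import math

class FeedbackNB:
    """Multinomial naive Bayes whose per-class word weights can be
    nudged by user keyword feedback (illustrative sketch only)."""

    def __init__(self, boost=5.0, smoothing=1.0):
        self.boost = boost          # weight given to one unit of user feedback
        self.smoothing = smoothing  # Laplace smoothing constant
        self.word_counts = defaultdict(Counter)  # class -> word -> count
        self.class_counts = Counter()
        self.vocab = set()

    def train(self, docs):
        """docs: iterable of (list_of_words, label) pairs."""
        for words, label in docs:
            self.class_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def add_keyword_feedback(self, word, label):
        """Treat 'this word indicates this class' as extra pseudo-counts,
        so feedback shifts the model without retraining from scratch."""
        self.word_counts[label][word] += self.boost
        self.vocab.add(word)

    def predict(self, words):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total)  # log prior
            denom = (sum(self.word_counts[label].values())
                     + self.smoothing * len(self.vocab))
            for w in words:
                num = self.word_counts[label][w] + self.smoothing
                lp += math.log(num / denom)  # smoothed log likelihood
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Because feedback enters as bounded pseudo-counts rather than hard rules, a single keyword correction shifts predictions smoothly; tuning `boost` trades responsiveness against the kind of early-training instability the study observed.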