WordNet: a lexical database for English. Communications of the ACM.
Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics.
Meeting Analysis: Findings from Research and Practice. HICSS '01: Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34), Volume 1.
Detecting action-items in e-mail. Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Segmenting meetings into agenda items by extracting implicit supervision from human note-taking. Proceedings of the 12th International Conference on Intelligent User Interfaces.
SmartNotes: implicit labeling of meeting data through user note-taking and browsing. NAACL-Demonstrations '06: Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Companion Volume: Demonstrations.
Automatically detecting action items in audio meeting recordings. SIGdial '06: Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue.
Accessing multimodal meeting data: systems, problems and possibilities. MLMI '04: Proceedings of the First International Conference on Machine Learning for Multimodal Interaction.
Detecting action items in multi-party meetings: annotation and initial experiments. MLMI '06: Proceedings of the Third International Conference on Machine Learning for Multimodal Interaction.
Markup as you talk: establishing effective memory cues while still contributing to a meeting. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work.
In [1,2], we presented a method for automatic detection of action items from natural conversation. This method relies on supervised classification techniques trained on data annotated according to a hierarchical notion of dialogue structure, data which are expensive and time-consuming to produce. In [3], we presented a meeting browser which allows users to view a set of automatically produced action item summaries and give feedback on their accuracy. In this paper, we investigate methods of using this kind of feedback as implicit supervision, in order to bypass the costly annotation process and enable machine learning through use. By transforming human annotations into hypothetical idealized user interactions, we investigate the relative utility of various modes of user interaction and techniques for their interpretation. We show that performance improvements are possible, even with interfaces that demand very little of their users' attention.
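The core idea of implicit supervision via simulated feedback can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: all function names and the overlap threshold are assumptions. An "idealized user" is simulated from gold annotations by confirming any hypothesized action item that sufficiently overlaps a gold one and rejecting the rest; the confirmed and rejected items then serve as positive and negative training examples without further manual annotation.

```python
def simulate_feedback(hypothesized_items, gold_items, min_overlap=0.5):
    """Simulate an idealized user's accept/reject feedback.

    Each item is a set of utterance ids. The simulated user confirms a
    hypothesized item if its Jaccard overlap with some gold annotation
    meets the (assumed) threshold, and rejects it otherwise.
    """
    feedback = []
    for hyp in hypothesized_items:
        confirmed = any(
            len(hyp & gold) / len(hyp | gold) >= min_overlap
            for gold in gold_items
        )
        feedback.append((hyp, confirmed))
    return feedback


def to_training_examples(feedback):
    """Turn feedback into labeled data for retraining the detector:
    confirmed items become positive examples, rejected ones negative."""
    return [(sorted(hyp), 1 if ok else 0) for hyp, ok in feedback]


# Example: one hypothesis overlaps the gold annotation, one does not.
hyps = [{1, 2, 3}, {7, 8}]
gold = [{2, 3, 4}]
examples = to_training_examples(simulate_feedback(hyps, gold))
# examples == [([1, 2, 3], 1), ([7, 8], 0)]
```

Varying how much of the browser interaction the simulated user performs (e.g. confirming only, versus also correcting boundaries) is what lets one compare the utility of different interaction modes before building each interface.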