Our goal is to make note-taking easier in meetings by automatically detecting noteworthy utterances in verbal exchanges and suggesting them to meeting participants for inclusion in their notes. To show the feasibility of such a process, we conducted a Wizard of Oz study in which the Wizard picked automatically transcribed utterances that he judged noteworthy and suggested their contents to the participants as notes. Across 9 meetings, participants accepted 35% of these suggestions, and 41.5% of their notes at the end of each meeting contained Wizard-suggested text. Next, in order to perform noteworthiness detection automatically, we annotated a set of 6 meetings with a 3-level noteworthiness annotation scheme, a break from the binary "in summary"/"not in summary" labeling typically used in speech summarization. We report a Kappa of 0.44 for the 3-way classification, rising to 0.58 when two of the three labels are merged into one. Finally, we trained an SVM classifier on this annotated data; its performance lies between that of trivial baselines and inter-annotator agreement.
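The agreement figures above can be reproduced mechanically. As a minimal sketch (the label values and the toy annotations below are hypothetical, not the paper's data), Cohen's Kappa corrects observed agreement for the agreement two annotators would reach by chance, and merging two of the three noteworthiness levels into one can raise Kappa when most disagreements fall between the merged levels:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's Kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 3-level labels: 2 = noteworthy, 1 = maybe, 0 = not noteworthy.
a = [2, 1, 0, 2, 0, 1, 2, 0, 1, 0]
b = [1, 2, 0, 2, 0, 2, 1, 1, 1, 0]

# Merging the two noteworthy-ish levels (1 and 2) yields a binary scheme.
def merge(labels):
    return [1 if lab >= 1 else 0 for lab in labels]

print(cohen_kappa(a, b))                    # 3-way Kappa
print(cohen_kappa(merge(a), merge(b)))      # 2-way Kappa after merging
```

On this toy data the annotators mostly disagree about which of the two noteworthy levels applies, so the merged 2-way Kappa comes out higher than the 3-way one, the same direction of effect the abstract reports (0.44 vs. 0.58).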