We present Conditional Random Field-based approaches for detecting agreement and disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena, and explore lexical, structural, durational, and prosodic features. We compare performance using features extracted from automatically generated annotations against performance using human annotations, and investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data are highly imbalanced, we explore two sampling approaches: random downsampling and ensemble downsampling. Overall, on the English broadcast conversation data, our approach achieves 79.2% precision, 50.5% recall, and 61.7% F1 for agreement detection, and 69.2% precision, 46.9% recall, and 55.9% F1 for disagreement detection.
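The two downsampling strategies for the imbalanced training data can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `(features, label)` pair representation, and the majority/minority bookkeeping are assumptions; an `f1` helper is included as a sanity check that the reported precision/recall/F1 triples are mutually consistent.

```python
import random

def f1(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

def random_downsample(data, majority_label, ratio=1.0, seed=42):
    # Random downsampling: keep every minority-class example and a random
    # subset of the majority class, so |majority| <= ratio * |minority|.
    # `data` is a list of (features, label) pairs.
    rng = random.Random(seed)
    minority = [ex for ex in data if ex[1] != majority_label]
    majority = [ex for ex in data if ex[1] == majority_label]
    keep = rng.sample(majority, min(len(majority), int(ratio * len(minority))))
    balanced = minority + keep
    rng.shuffle(balanced)
    return balanced

def ensemble_partitions(data, majority_label, n_models, seed=42):
    # Ensemble downsampling: split the majority class into n_models
    # disjoint chunks; each chunk plus the full minority set forms the
    # training set of one classifier, and the classifiers' predictions
    # are later combined (e.g. by voting or score averaging).
    rng = random.Random(seed)
    minority = [ex for ex in data if ex[1] != majority_label]
    majority = [ex for ex in data if ex[1] == majority_label]
    rng.shuffle(majority)
    chunk = max(1, len(majority) // n_models)
    return [minority + majority[i * chunk:(i + 1) * chunk]
            for i in range(n_models)]
```

With 9 "other" examples and 3 "agree" examples, `random_downsample(..., ratio=1.0)` yields a 3:3 training set, while `ensemble_partitions(..., n_models=3)` yields three balanced training sets that together cover all majority-class examples.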