This paper deals with speech overlaps in dyadic video-recorded spontaneous conversations. Speech overlaps are quite common in everyday conversations, and it is therefore important to study their occurrence in different communicative situations and settings and to model them in applied communicative systems. In the present work, we investigate the frequency and use of speech overlaps in a multimodally annotated corpus of first encounters. Speech overlaps were automatically tagged, and a Bayesian Network learner was trained on the multimodal annotations in order to determine to what extent overlaps can be predicted, so that they can be dealt with in conversational devices, and to investigate the relation between overlaps, speech tokens and co-occurring body behaviours. The annotations comprise the shape and function of head movements, facial expressions and body postures. 23% of the speech tokens and 90% of the spoken contributions in the first encounters are overlapping. The best classification results were obtained by training the classifier on the multimodal behaviours (speech and co-occurring head movements, facial expressions and body postures) which surrounded the overlaps. Training the classifier on all speech tokens also gave good results, while adding the shape of co-occurring body behaviours to them did not affect the results. Thus, the behaviour of the conversation participants does not change when there is a speech overlap, which could indicate that most of the overlaps in the first encounters are non-competitive.
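The classification task described above — predicting whether a stretch of talk is overlapping from categorical multimodal annotations — can be sketched as follows. This is only an illustration: the feature names, the synthetic data, and the use of scikit-learn's `CategoricalNB` as a stand-in for the paper's Bayesian Network learner are all assumptions, not the authors' actual setup or corpus.

```python
# Hypothetical sketch of overlap classification from categorical
# multimodal features. Data and feature encoding are invented;
# CategoricalNB stands in for the Bayesian Network learner used
# in the paper, since the original model and corpus are not given.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)
n = 400

# Invented categorical features per speech token:
#   column 0: head-movement shape   (3 categories)
#   column 1: facial expression     (3 categories)
#   column 2: body posture          (2 categories)
#   column 3: speech-token class    (4 categories)
X = rng.integers(0, [3, 3, 2, 4], size=(n, 4))

# Invented label: token is overlapping (1) or not (0), made to
# depend on head movement and token class for demonstration only.
y = ((X[:, 0] == 2) & (X[:, 3] >= 2)).astype(int)

# Train on the first 300 tokens, evaluate on the remaining 100.
clf = CategoricalNB().fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"held-out accuracy: {acc:.2f}")
```

In the paper's actual experiments, the informative configuration was not the co-occurring behaviours alone but the multimodal behaviours surrounding the overlaps, so a faithful reimplementation would also need features from the neighbouring context of each token.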