- Integration and synchronization of input modes during multimodal human-computer interaction. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI).
- Multimodal people ID for a multimedia meeting browser. In Proceedings of the Seventh ACM International Conference on Multimedia (MULTIMEDIA '99), Part 1.
- A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, special issue on computational anaphora resolution.
- Resolving pronominal reference to abstract entities. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02).
- Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02).
- Multimodal identity tracking in a smart room. Personal and Ubiquitous Computing.
- Generating usable formats for metadata and annotations in a large meeting corpus. In Proceedings of the 45th Annual Meeting of the ACL, Interactive Poster and Demonstration Sessions (ACL '07).
- Gesture improves coreference resolution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers (NAACL-Short '06).
- VACE multimodal meeting corpus. In Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction (MLMI '05).
- A multimodal discourse ontology for meeting understanding. In Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction (MLMI '05).
- Multistream recognition of dialogue acts in meetings. In Proceedings of the Third International Conference on Machine Learning for Multimodal Interaction (MLMI '06).
- Modeling focus of attention for meeting indexing based on multiple cues. IEEE Transactions on Neural Networks.
- Hand gestures in disambiguating types of you expressions in multiparty meetings. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL '10).
- Development of a taxonomy to improve human-robot-interaction through multimodal robot feedback. In CHI '13 Extended Abstracts on Human Factors in Computing Systems.
During multiparty meetings, participants can use non-verbal modalities such as hand gestures to refer to the shared environment. One hypothesis, therefore, is that incorporating hand gestures can improve coreference identification, the task of automatically determining what participants refer to with their linguistic expressions. To evaluate this hypothesis, this paper examines the role of hand gestures in coreference identification, focusing on two questions: (1) what signals distinguish communicative gestures, which can potentially help coreference identification, from non-communicative gestures; and (2) in what ways communicative gestures can help coreference identification. Based on the AMI data, our empirical results show that the length of gesture production is highly indicative of whether a gesture is communicative and thus potentially helpful in language understanding. Our experiments on the automated identification of coreferring expressions indicate that although incorporating simple gesture features does not improve overall performance, it does show promise on expressions referring to participants, an important and unique component of the meeting domain. A further analysis suggests that communicative gestures provide both redundant and complementary information, but that further domain modeling and incorporation of world knowledge are required to take full advantage of the complementary information.
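To make the duration-based finding concrete, the following is a minimal illustrative sketch of filtering gestures by production length before using them as features. The `Gesture` class, the `0.6` second threshold, and the sample timestamps are all assumptions for illustration; the abstract reports only that gesture length is indicative of communicativeness, not a specific cutoff or data structure.

```python
# Hypothetical sketch: screening communicative gestures by production length.
# All names and the 0.6 s threshold are illustrative assumptions, not values
# taken from the paper.
from dataclasses import dataclass


@dataclass
class Gesture:
    start: float  # onset time in seconds
    end: float    # offset time in seconds

    @property
    def duration(self) -> float:
        return self.end - self.start


def is_communicative(g: Gesture, min_duration: float = 0.6) -> bool:
    """Treat a gesture as communicative if its production is long enough.

    A real system would learn this decision from annotated data; the fixed
    threshold here only illustrates the idea of duration as a signal.
    """
    return g.duration >= min_duration


gestures = [Gesture(0.0, 0.3), Gesture(1.0, 2.1)]
flags = [is_communicative(g) for g in gestures]  # short gesture filtered out
```

Only gestures passing such a filter would then contribute features (e.g., co-occurrence with a referring expression) to a coreference classifier.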