Resolving pronominal reference to abstract entities
Identifying the addressee in human-human-robot interactions based on head pose and speech. Proceedings of the 6th International Conference on Multimodal Interfaces
A machine learning approach to pronoun resolution in spoken dialogue. ACL '03 Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1
ICMI '05 Proceedings of the 7th International Conference on Multimodal Interfaces
ACL '04 Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics
Disambiguating between generic and referential "you" in dialog. ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions
SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Cascaded lexicalised classifiers for second-person reference resolution. SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
The CALO meeting assistant system. IEEE Transactions on Audio, Speech, and Language Processing
Annotating participant reference in English spoken conversation. LAW IV '10 Proceedings of the Fourth Linguistic Annotation Workshop
Hand gestures in disambiguating types of you expressions in multiparty meetings. SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Identifying utterances addressed to an agent in multiparty human-agent conversations. IVA '11 Proceedings of the 10th International Conference on Intelligent Virtual Agents
ICMI '11 Proceedings of the 13th International Conference on Multimodal Interfaces
Addressee identification for human-human-agent multiparty conversations in different proxemics. Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
We explore the problem of resolving the second person English pronoun you in multi-party dialogue, using a combination of linguistic and visual features. First, we distinguish generic and referential uses, then we classify the referential uses as either plural or singular, and finally, for the latter cases, we identify the addressee. In our first set of experiments, the linguistic and visual features are derived from manual transcriptions and annotations, but in the second set, they are generated through entirely automatic means. Results show that a multimodal system is often preferable to a unimodal one.
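The abstract describes a three-stage cascade: generic vs. referential "you", then singular vs. plural, then addressee identification for singular referential uses. The following is a minimal sketch of that control flow only, under assumptions of my own: the stub classifiers and feature names (gaze durations, a group-address cue) are hypothetical placeholders, not the paper's actual models or features.

```python
# Sketch of the cascaded "you"-resolution pipeline from the abstract.
# The three classifiers are injected as callables; the toy versions
# below are illustrative stand-ins, not the authors' trained models.

def resolve_you(features, is_referential, is_singular, pick_addressee):
    """Run the cascade on one utterance's multimodal feature dict."""
    if not is_referential(features):          # stage 1: generic vs. referential
        return ("generic", None)
    if not is_singular(features):             # stage 2: plural vs. singular
        return ("referential-plural", None)
    return ("referential-singular", pick_addressee(features))  # stage 3

# Toy stand-ins mixing a linguistic cue with visual (gaze) cues.
is_referential = lambda f: f["speaker_gazes_at_listener"]
is_singular = lambda f: not f["group_address_cue"]
pick_addressee = lambda f: max(f["gaze_time"], key=f["gaze_time"].get)

utterance = {
    "speaker_gazes_at_listener": True,
    "group_address_cue": False,
    "gaze_time": {"A": 0.7, "B": 0.2, "C": 0.1},  # seconds of gaze per listener
}
print(resolve_you(utterance, is_referential, is_singular, pick_addressee))
# -> ('referential-singular', 'A')
```

In the paper's setup each stage would be a trained classifier over the linguistic and visual features (manual in the first experiments, automatic in the second); the cascade structure itself is what this sketch captures.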