Texts and dialogues often express information indirectly. For instance, speakers' answers to yes/no questions do not always straightforwardly convey a 'yes' or 'no' answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting answers to questions like these, which involve scalar modifiers. We show how to ground scalar modifier meaning in data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys 'yes' or 'no'. To evaluate the methods, we collected examples of question-answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus, and used response distributions from Mechanical Turk workers to assess the degree to which each answer conveys 'yes' or 'no'. Our experimental results closely match the Turkers' response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.
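The inference step described above can be sketched in miniature. This is an illustrative toy, not the authors' actual method: it assumes a hypothetical strength ordering over scalar modifiers (of the kind that could be estimated from Web data) and compares the answer's modifier against the question's to decide whether the reply conveys 'yes', 'no', or remains uncertain.

```python
# Hypothetical scale positions for a handful of scalar modifiers, standing
# in for an ordering learned from Web data. The specific values and the
# comparison rule below are illustrative assumptions, not the paper's model.
SCALE = {"okay": 1, "acceptable": 2, "good": 3, "great": 4, "excellent": 5}

def infer_reply(question_mod: str, answer_mod: str) -> str:
    """Infer the intended yes/no reply from two scalar modifiers.

    An answer modifier at least as strong as the question's conveys 'yes';
    a strictly weaker one implicates 'no'; a modifier not on the learned
    scale (e.g. 'unprecedented') leaves the reply uncertain.
    """
    q = SCALE.get(question_mod)
    a = SCALE.get(answer_mod)
    if q is None or a is None:
        return "uncertain"
    return "yes" if a >= q else "no"

# Was it good? It was great!        -> 'yes'
# Was it great? It was good.        -> 'no' (scalar implicature)
# Was it acceptable? Unprecedented. -> 'uncertain' (off the scale)
```

In practice the paper grades the inference (the *extent* to which an answer conveys 'yes' or 'no', evaluated against Turker response distributions) rather than returning a categorical label as this sketch does.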