This paper describes our experiments using Amazon's Mechanical Turk to generate (counter-)facts from texts for given named entities. We present human annotators with a paragraph of text in which one named entity is highlighted, and they write down several (counter-)facts about that entity in that context. We analyze the results by comparing the acquired data with the Recognizing Textual Entailment (RTE) challenge dataset.
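As a concrete illustration of this elicitation setup, the sketch below formats a single task: the paragraph with the target named entity highlighted, followed by free-text fields for the requested (counter-)facts. The paper does not specify its HIT layout, so the markup, field names, and number of fields here are assumptions.

```python
# Minimal sketch of formatting one (counter-)fact elicitation task.
# Assumption: the highlighting markup, field names, and the default of
# three requested facts are illustrative, not the paper's actual design.
import html

def build_task_html(paragraph: str, entity: str, n_facts: int = 3) -> str:
    """Render a paragraph with the target named entity highlighted,
    followed by free-text inputs for the annotator's (counter-)facts."""
    escaped = html.escape(paragraph)
    # Wrap every occurrence of the entity in <mark> tags.
    highlighted = escaped.replace(
        html.escape(entity), f"<mark>{html.escape(entity)}</mark>")
    fields = "\n".join(
        f'<p>Fact {i + 1}: <input type="text" name="fact{i + 1}" size="80"></p>'
        for i in range(n_facts))
    return f"<p>{highlighted}</p>\n{fields}"

if __name__ == "__main__":
    text = ("Amazon's Mechanical Turk is a crowdsourcing marketplace "
            "operated by Amazon.")
    print(build_task_html(text, "Amazon"))
```

Such a snippet could serve as the body of a Mechanical Turk HIT; the collected free-text answers would then be paired with the source paragraph as candidate (counter-)facts for comparison against RTE-style entailment pairs.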