Procedure for quantitatively comparing the syntactic coverage of English grammars
HLT '91 Proceedings of the workshop on Speech and Natural Language
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
Assigning function tags to parsed text
NAACL 2000 Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference
A DOP model for semantic interpretation
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
Accurate unlexicalized parsing
ACL '03 Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1
Head-Driven Statistical Models for Natural Language Parsing
Computational Linguistics
Coarse-to-fine n-best parsing and MaxEnt discriminative reranking
ACL '05 Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
Non-projective dependency parsing using spanning tree algorithms
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank
Computational Linguistics
Wide-coverage efficient statistical parsing with CCG and log-linear models
Computational Linguistics
CrossParser '08 Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation
Unbounded dependency recovery for parser evaluation
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2
Cambridge: Parser evaluation using textual entailment by grammatical relation comparison
SemEval '10 Proceedings of the 5th International Workshop on Semantic Evaluation
MARS: A specialized RTE system for parser evaluation
SemEval '10 Proceedings of the 5th International Workshop on Semantic Evaluation
SCHWA: PETE using CCG dependencies with the C&C parser
SemEval '10 Proceedings of the 5th International Workshop on Semantic Evaluation
Answer validation using textual entailment
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part II
A statistics-based semantic textual entailment system
MICAI'11 Proceedings of the 10th Mexican international conference on Advances in Artificial Intelligence - Volume Part I
Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system
EACL '12 Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
JU_CSE_NLP: language independent cross-lingual textual entailment system
SemEval '12 Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Linguistically-enriched models for Bulgarian-to-English machine translation
SSST-6 '12 Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation
Recognizing textual entailment in non-English text via automatic translation into English
MICAI'12 Proceedings of the 11th Mexican international conference on Advances in Computational Intelligence - Volume Part II
An inference-based model of word meaning in context as a paraphrase distribution
ACM Transactions on Intelligent Systems and Technology (TIST) - Special Sections on Paraphrasing; Intelligent Systems for Socially Aware Computing; Social Computing, Behavioral-Cultural Modeling, and Prediction
Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism-independent, less prone to annotation error, and focused on semantically relevant distinctions.
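As an illustration of how a PETE-style system can reduce entailment recognition to a syntactic check, the sketch below decides entailment by testing whether every dependency of the hypothesis also occurs among the dependencies of the text. This is a deliberate simplification, not the official PETE evaluation procedure; the `entails` function and the (head, relation, dependent) triples are hypothetical stand-ins for real parser output.

```python
# Illustrative simplification of syntax-based entailment recognition:
# the hypothesis is judged entailed when all of its dependencies are
# contained in the dependencies of the text. Dependency triples here
# are hypothetical examples of what a parser might produce.

def entails(text_deps, hypothesis_deps):
    """Return True if every hypothesis dependency occurs in the text."""
    return set(hypothesis_deps) <= set(text_deps)

# Hypothetical parser output for the text "The man with the hat ran."
text_deps = {("ran", "nsubj", "man"), ("man", "prep_with", "hat")}
# ...and for the hypothesis "The man ran."
hyp_deps = {("ran", "nsubj", "man")}

print(entails(text_deps, hyp_deps))  # True: the hypothesis is entailed
```

Actual participating systems (e.g. those comparing CCG or grammatical-relation dependencies) refine this containment idea with relation normalization and partial matching, but the core intuition is the same: entailment judgments are driven purely by syntactic structure.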