Formal methods for analysing the meaning of natural language expressions have long been confined to the ivory tower of semanticists, logicians, and philosophers of language. Only in exceptional cases have these methods made their way directly into open-domain natural language processing tools. Recently, however, this situation has changed. Thanks to (i) treebanks, i.e., large collections of texts annotated with syntactic structures, (ii) robust statistical parsers trained on such treebanks, and (iii) large-scale semantic lexica such as WordNet [1], VerbNet [2], PropBank [3], and FrameNet [4], we have now witnessed the emergence of wide-coverage systems that can produce formal semantic representations for open-domain texts.