One of the big challenges in understanding text, i.e., constructing an overall coherent representation of the text, is that much of the information needed in that representation is unstated (implicit). Thus, in order to "fill in the gaps" and create an overall representation, language processing systems need a large amount of world knowledge, and creating those knowledge resources remains a fundamental challenge. In our current work, we are seeking to augment WordNet as a knowledge resource for language understanding in several ways: adding formal versions of its word sense definitions (glosses); classifying the morphosemantic links between nouns and verbs; encoding a small number of "core theories" about WordNet's most commonly used terms; and adding simple representations of scripts. Although this is still work in progress, we describe our experiences so far with what we hope will be a significantly improved resource for the deep understanding of language.
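To make the four kinds of augmentation concrete, here is a minimal sketch in Python. This is a hypothetical illustration, not the authors' actual encoding: the sense keys, the logic-form strings, and the `morpho_links` relation names are all invented for the example, and a real resource would use a proper logical formalism rather than strings.

```python
# Toy sketch (hypothetical, not the authors' representation) of two of the
# augmentations described above: formalized glosses attached to word senses,
# and classified morphosemantic links between noun and verb senses.

from dataclasses import dataclass, field


@dataclass
class Sense:
    word: str
    gloss: str                  # informal dictionary gloss
    logic_form: str             # hand-written formal version (illustrative only)
    morpho_links: dict = field(default_factory=dict)  # relation -> sense key


# A two-entry toy lexicon: a noun sense and the verb sense it derives from.
lexicon = {
    "hammer.n.01": Sense(
        word="hammer",
        gloss="a hand tool for driving nails",
        logic_form="isa(x, HandTool) & purpose(x, drive(x, Nails))",
        morpho_links={"event": "hammer.v.01"},       # noun -> associated event
    ),
    "hammer.v.01": Sense(
        word="hammer",
        gloss="beat with or as if with a hammer",
        logic_form="isa(e, HitEvent) & instrument(e, Hammer)",
        morpho_links={"instrument": "hammer.n.01"},  # verb -> instrument noun
    ),
}


def formal_gloss(sense_key):
    """Return the formalized gloss for a sense, if one has been added."""
    return lexicon[sense_key].logic_form


def linked_sense(sense_key, relation):
    """Follow a classified morphosemantic link (e.g. 'instrument')."""
    target = lexicon[sense_key].morpho_links.get(relation)
    return lexicon.get(target)


print(formal_gloss("hammer.n.01"))
print(linked_sense("hammer.v.01", "instrument").gloss)
```

The point of classifying the noun-verb links (rather than leaving them as undifferentiated "derivationally related" pointers) is visible in `linked_sense`: an inference system can ask specifically for the *instrument* of a hammering event, instead of merely knowing that the noun and verb are morphologically related.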