A factuality profiler for eventualities in text
Identifying the veracity, or factuality, of event mentions in text is fundamental for reasoning about eventualities in discourse. Inferences drawn from events judged not to have happened, or to be merely possible, differ from those drawn from events evaluated as factual. Event factuality involves two separate levels of information. On the one hand, it deals with polarity, which distinguishes between positive and negative instantiations of events. On the other, it concerns degrees of certainty (e.g., possible, probable), an information level generally subsumed under the category of epistemic modality. This article aims to contribute to a better understanding of how event factuality is articulated in natural language. To that end, we put forward a linguistically oriented computational model whose core is an algorithm that propagates the effect of factuality relations across levels of syntactic embedding. As a proof of concept, the model has been implemented in De Facto, a factuality profiler for eventualities mentioned in text, and tested against a corpus built specifically for the task, yielding an F1 of 0.70 (macro-averaged) and 0.80 (micro-averaged). Each of these measures compensates for the other's over-emphasis on either the less or the more populated categories, so the two can be interpreted as lower and upper bounds on De Facto's performance.
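The gap between the macro- and micro-averaged scores reflects how each treats class imbalance: macro-averaging weights every factuality category equally (so rare, harder categories pull it down), while micro-averaging pools all decisions (so frequent categories dominate). A minimal sketch of both computations follows; the factuality labels used in the comments (`CT+`, `CT-`, `PS+`) are illustrative stand-ins, not necessarily the tag set used in the article's corpus.

```python
from collections import Counter

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_micro_f1(gold, pred):
    """Return (macro-F1, micro-F1) for single-label multi-class output."""
    labels = set(gold) | set(pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1   # predicted label gets a false positive
            fn[g] += 1   # gold label gets a false negative
    # Macro: unweighted mean of per-class F1 (rare classes count fully).
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    # Micro: F1 over pooled counts (frequent classes dominate).
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return macro, micro

# Toy example with hypothetical factuality tags: the rare class PS+ is
# missed entirely, so macro-F1 falls well below micro-F1.
macro, micro = macro_micro_f1(
    ["CT+", "CT+", "CT-", "PS+"],
    ["CT+", "CT-", "CT-", "CT+"],
)
```

On single-label multi-class data, micro-F1 coincides with plain accuracy, which is why the abstract can read the micro score as an upper bound dominated by the well-populated categories.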