We present a preliminary pilot study of belief annotation and automatic tagging. Our objective is to explore semantic meaning beyond the surface proposition: we aim to model people's cognitive states, namely the beliefs they express through linguistic means, the strength of those beliefs, and the speaker's degree of commitment to the utterance. We consider only the perspective of the author of a text, classifying each predicate into one of three categories: committed belief, non-committed belief, or not applicable. We manually annotate data to that end, then build a supervised framework to test the feasibility of automatically predicting these belief states. Although the data set is relatively small, we show that automatic prediction of the belief class is a feasible task. Using syntactic features, we obtain an absolute improvement of 23 F-measure points over a simple baseline. The best-performing tagging condition combines POS tags, the word-type feature AlphaNumeric, and shallow syntactic chunk information (CHUNK). Our best overall performance is 53.97% F-measure.
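The supervised setup described above can be sketched as a token-level three-way classifier over syntactic features. The following is a minimal illustration, not the authors' actual system: the toy data, feature encoding, and choice of scikit-learn logistic regression are all assumptions made here for clarity. It only shows how POS, an alphanumeric word-type flag, and chunk tags could feed a belief-class predictor.

```python
# Hypothetical sketch of supervised belief tagging (not the paper's system).
# Features per predicate token: POS tag, AlphaNumeric word-type flag, chunk tag.
# Classes: CB = committed belief, NCB = non-committed belief, NA = not applicable.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-made training examples (illustrative only).
train = [
    ({"pos": "VBZ", "alphanum": True,  "chunk": "B-VP"}, "CB"),
    ({"pos": "VBD", "alphanum": True,  "chunk": "B-VP"}, "CB"),
    ({"pos": "MD",  "alphanum": True,  "chunk": "B-VP"}, "NCB"),
    ({"pos": "VB",  "alphanum": True,  "chunk": "I-VP"}, "NCB"),
    ({"pos": "NN",  "alphanum": True,  "chunk": "B-NP"}, "NA"),
    ({"pos": "CD",  "alphanum": False, "chunk": "B-NP"}, "NA"),
]
X, y = zip(*train)

# DictVectorizer one-hot encodes the categorical features;
# a linear classifier then predicts one of the three belief classes.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(X), list(y))

# Predict the belief class for an unseen predicate token.
pred = model.predict([{"pos": "VBZ", "alphanum": True, "chunk": "B-VP"}])[0]
print(pred)
```

In the actual study the features come from automatic POS tagging and shallow chunking of the annotated corpus; the sketch above merely mirrors that feature template on fabricated inputs.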