Committed belief annotation and tagging

  • Authors:
  • Mona T. Diab (Columbia U.); Lori Levin (LTI, CMU); Teruko Mitamura (LTI, CMU); Owen Rambow (Columbia U.); Vinodkumar Prabhakaran (Columbia U.); Weiwei Guo (Columbia U.)

  • Venue:
  • Proceedings of the Third Linguistic Annotation Workshop (ACL-IJCNLP '09)
  • Year:
  • 2009

Abstract

We present a pilot study of belief annotation and automatic tagging. Our objective is to explore semantic meaning beyond surface propositions: we aim to model people's cognitive states, namely the beliefs they express through linguistic means, together with the strength of those beliefs and the speaker's degree of commitment to the utterance. We consider only the perspective of the author of a text, and classify each predicate into one of three categories: committed belief, non-committed belief, or not applicable. We manually annotate data to that end, then build a supervised framework to test the feasibility of automatically predicting these belief states. Even though the dataset is relatively small, we show that automatic prediction of the belief class is a feasible task. Using syntactic features, we obtain an improvement of 23 absolute F-measure points over a simple baseline. The best-performing tagging condition combines the POS tag, the word-type feature AlphaNumeric, and shallow syntactic chunk information (CHUNK), yielding our best overall performance of 53.97% F-measure.
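To make the best-performing feature combination concrete, here is a minimal sketch of a supervised belief tagger over the three feature types named in the abstract: the token's POS tag, an alphanumeric word-type indicator, and its shallow-chunk tag. The abstract does not specify the learner, so the linear SVM via scikit-learn and the toy training pairs below are illustrative assumptions, not the authors' implementation or corpus.

```python
# Hedged sketch of a three-way belief classifier (CB = committed belief,
# NCB = non-committed belief, NA = not applicable) using the feature set
# from the paper's best condition: POS + AlphaNumeric + CHUNK.
# The learner choice and example tokens are assumptions for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def token_features(word, pos, chunk):
    """Map one token to the three feature types named in the abstract."""
    return {
        "pos": pos,                  # POS tag feature
        "alphanum": word.isalnum(),  # word-type feature AlphaNumeric
        "chunk": chunk,              # shallow syntactic chunk (CHUNK)
    }

# Hypothetical annotated tokens: (word, POS, chunk) -> belief label.
train = [
    (("believe", "VBP", "B-VP"), "CB"),
    (("may",     "MD",  "B-VP"), "NCB"),
    (("table",   "NN",  "B-NP"), "NA"),
]
X = [token_features(*tok) for tok, _ in train]
y = [label for _, label in train]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([token_features("think", "VBP", "B-VP")]))
```

DictVectorizer one-hot encodes the categorical POS and chunk values and treats the boolean AlphaNumeric flag numerically, so the three feature types combine into a single sparse vector per predicate token.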