In this paper we present an active learning approach used to create an annotated corpus of literal and nonliteral usages of verbs. The model uses nearly unsupervised word-sense disambiguation and clustering techniques. We report on experiments in which a human expert is asked to correct system predictions at different stages of learning: (i) after the last iteration, when the clustering step has converged, or (ii) during each iteration of the clustering algorithm. The model obtains an f-score of 53.8% on a dataset in which literal/nonliteral usages of 25 verbs were annotated by human experts. In comparison, the same model augmented with active learning obtains an f-score of 64.91%. We also measure the number of examples required when model confidence is used to select examples for human correction, as compared to random selection. The results of this active learning system have been compiled into a freely available annotated corpus of literal/nonliteral usage of verbs in context.
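The abstract contrasts confidence-based selection of examples for human correction with random selection, but gives no pseudocode. The sketch below is an illustrative reconstruction of that selection step only, not the paper's implementation; the function name, the tuple layout, and the toy confidence scores are all assumptions made for the example.

```python
import random

def select_for_correction(predictions, k, strategy="confidence"):
    """Pick k examples to hand to the human expert for correction.

    predictions: list of (example_id, predicted_label, confidence) tuples
    produced by the (hypothetical) literal/nonliteral classifier.
    strategy="confidence" ranks the least-confident predictions first,
    so the annotator's effort goes where the model is most uncertain;
    strategy="random" is the baseline the paper compares against.
    """
    if strategy == "confidence":
        ranked = sorted(predictions, key=lambda p: p[2])  # lowest confidence first
    else:
        ranked = random.sample(predictions, len(predictions))
    return [ex_id for ex_id, _, _ in ranked[:k]]

# Toy model output: (sentence id, predicted usage, confidence)
preds = [
    ("s1", "literal", 0.95),
    ("s2", "nonliteral", 0.51),
    ("s3", "literal", 0.62),
    ("s4", "nonliteral", 0.88),
]
print(select_for_correction(preds, 2))  # the two least-confident: ['s2', 's3']
```

Under confidence-based selection, fewer corrected examples should be needed to reach a given f-score than under random selection, since each correction targets a prediction the model was likely to get wrong.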