In this paper we tackle an opinion extraction (OE) task, i.e., identifying in a text each expression of subjectivity, the subject expressing it, and its possible target. We focus in particular on how lexical resources developed specifically for opinion mining can be used to improve the performance of an opinion extraction system. We report results, complete with statistical significance tests and inter-annotator agreement data, on two manually annotated corpora, one of English texts and one of Italian texts. We evaluate our results using standard evaluation measures as well as a new evaluation measure we have recently proposed.