In this paper, we consider the problem of building models with high sentiment classification accuracy without the aid of a labeled dataset from the target domain. To that end, we present and evaluate a novel method based on the level of abstraction of nouns. By comparing high-level features (e.g. level of affective words, level of abstraction of nouns) with low-level features (e.g. unigrams, bigrams), we show that high-level features are better suited to learning subjective language across domains. Our experiments achieve a cross-domain accuracy of 71.2% using SVM learning models.
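The setup described above can be sketched in a few lines: train a linear SVM on high-level features extracted from one domain and evaluate it on another. The feature values, domain data, and the Pegasos-style subgradient training below are illustrative assumptions, not the paper's actual pipeline or data; each document is reduced to two hypothetical high-level features, a fraction of affective words and a mean noun-abstraction score.

```python
# Minimal sketch (assumed data and feature values, not from the paper):
# a linear SVM trained with hinge-loss subgradient descent on high-level
# features from a "source" domain, then tested on a "target" domain.
# Feature vector per document: [affective-word ratio, noun abstraction].

def train_linear_svm(data, labels, lam=0.01, epochs=200, lr=0.1):
    """Pegasos-style subgradient descent on the regularized hinge loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # inside the margin: hinge term is active
                w = [w[j] + lr * (y * x[j] - lam * w[j]) for j in range(2)]
                b += lr * y
            else:  # outside the margin: only the regularizer shrinks w
                w = [w[j] * (1 - lr * lam) for j in range(2)]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy labeled source domain: +1 subjective, -1 objective.
source = [([0.8, 0.7], 1), ([0.7, 0.9], 1), ([0.9, 0.6], 1),
          ([0.1, 0.2], -1), ([0.2, 0.1], -1), ([0.3, 0.3], -1)]
# Toy target domain: slightly shifted feature distribution, never
# seen during training.
target = [([0.75, 0.8], 1), ([0.6, 0.7], 1),
          ([0.2, 0.25], -1), ([0.15, 0.1], -1)]

w, b = train_linear_svm([x for x, _ in source], [y for _, y in source])
acc = sum(predict(w, b, x) == y for x, y in target) / len(target)
print(f"cross-domain accuracy: {acc:.2f}")
```

The point of the sketch is the claim in the abstract: because high-level features such as affective-word density are defined independently of domain-specific vocabulary, a classifier trained on them can transfer to a new domain where unigram or bigram features would largely fail to overlap.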