Sentiment classification of product reviews has become a popular topic in the research community. In this paper, we propose an approach that generates multi-unigram features to enhance a negation-aware Naive Bayes classifier for sentence-level sentiment classification of product reviews. We coin the term "multi-unigram feature" for a new kind of feature, generated by our proposed algorithm, that captures unigram features which frequently co-occur in the training data. We further make the classifier aware of negation expressions during training and classification, eliminating the confusion that negation expressions within sentences would otherwise cause. Extensive experiments on a human-labeled data set not only qualitatively demonstrate the good quality of the generated multi-unigram features but also quantitatively show that our approach outperforms three baseline methods. Experiments analyzing the impact of parameters illustrate that our approach stably outperforms the baselines.
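The abstract's two ingredients can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the negation handling (suffixing the token after a negation word with `_NEG`), the co-occurrence threshold `min_count`, the pair-based composite features, and all function names here are illustrative assumptions layered on a standard add-one-smoothed Naive Bayes.

```python
from collections import Counter
from itertools import combinations
import math

# Assumed negation cue list; the paper's actual lexicon is not specified here.
NEGATIONS = {"not", "no", "never", "hardly"}

def tokenize(sentence):
    # Suffix the token following a negation cue with "_NEG" so the
    # classifier can tell "good" apart from a negated "good".
    tokens, negate = [], False
    for tok in sentence.lower().split():
        if tok in NEGATIONS:
            tokens.append(tok)
            negate = True
        else:
            tokens.append(tok + "_NEG" if negate else tok)
            negate = False
    return tokens

def multi_unigram_features(docs, min_count=2):
    # "Multi-unigram" stand-in: unordered unigram pairs that co-occur in
    # at least min_count training sentences become composite features.
    pair_counts = Counter()
    for toks in docs:
        for pair in combinations(sorted(set(toks)), 2):
            pair_counts[pair] += 1
    return {p for p, c in pair_counts.items() if c >= min_count}

def featurize(toks, pairs):
    # Emit the unigrams plus any qualifying co-occurrence pair as "a+b".
    feats = list(toks)
    for a, b in combinations(sorted(set(toks)), 2):
        if (a, b) in pairs:
            feats.append(a + "+" + b)
    return feats

class NaiveBayes:
    def fit(self, featurized_docs, labels):
        self.labels = set(labels)
        self.prior = Counter(labels)
        self.counts = {l: Counter() for l in self.labels}
        self.totals = Counter()
        self.vocab = set()
        for feats, l in zip(featurized_docs, labels):
            for f in feats:
                self.counts[l][f] += 1
                self.totals[l] += 1
                self.vocab.add(f)
        return self

    def predict(self, feats):
        def log_score(l):  # log prior + add-one-smoothed log likelihoods
            s = math.log(self.prior[l] / sum(self.prior.values()))
            for f in feats:
                s += math.log((self.counts[l][f] + 1)
                              / (self.totals[l] + len(self.vocab)))
            return s
        return max(self.labels, key=log_score)
```

With this design, "not great" is featurized as `["not", "great_NEG", "great_NEG+not"]` (when the pair clears the threshold), so negated sentiment words never pollute the counts of their positive counterparts.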