Sentiment classification identifies whether the opinion expressed in a document is positive or negative. In this paper, we present an evaluation-modeling approach to document-level sentiment classification. The work is motivated by the observation that global document classification can benefit greatly from learning how a topical term is evaluated in its local sentence context. For each topical term, two sentence-level sentiment evaluation models are constructed: a positive model and a negative model. When a document is analyzed, the evaluation models produce a divergence score that supports sentence-level classification, and the sentence labels are in turn aggregated to decide the classification of the whole document. Evaluated on a publicly available movie review corpus, our experimental results are comparable with previously published ones. These results are encouraging and motivate us to investigate more effective evaluation models in future work.
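To make the idea concrete, the pipeline described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the per-term positive and negative evaluation models are approximated here as add-one-smoothed unigram language models, a sentence's polarity is the sign of the log-likelihood divergence between the two models, and the document label is the sign of the summed sentence divergences. The training sentences and the topical term "plot" are hypothetical toy data.

```python
# Sketch of sentence-level evaluation models for document sentiment
# classification (toy approximation, assuming unigram models with
# add-one smoothing; not the paper's actual model).
from collections import Counter
import math

def train_unigram(sentences):
    """Build a unigram model (counts, total, vocab) from sentences."""
    counts = Counter(w for s in sentences for w in s.split())
    return counts, sum(counts.values()), set(counts)

def log_prob(model, sentence, vocab_size):
    """Add-one-smoothed log-likelihood of a sentence under a model."""
    counts, total, _ = model
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in sentence.split())

def classify_document(doc_sentences, pos_model, neg_model):
    """Sum per-sentence divergences between the positive and negative
    models; the sign of the total decides the document label."""
    vocab_size = len(pos_model[2] | neg_model[2]) + 1  # +1 for unseen
    score = sum(log_prob(pos_model, s, vocab_size)
                - log_prob(neg_model, s, vocab_size)
                for s in doc_sentences)
    return "positive" if score > 0 else "negative"

# Toy training sentences for the (hypothetical) topical term "plot".
pos_model = train_unigram(["the plot was gripping", "a clever plot twist"])
neg_model = train_unigram(["the plot was dull", "a predictable boring plot"])

print(classify_document(["the plot was gripping and clever"],
                        pos_model, neg_model))  # → positive
```

In the actual approach, the divergence between richer positive and negative evaluation models plays this role, and sentence decisions are combined collectively rather than by a simple sum.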