Opinion detection is one of the most interesting and challenging tasks in the field of information retrieval, and a substantial body of research, including some distinctive work, already exists in this area. A review of the literature reveals that researchers have worked at different levels of granularity, such as documents, passages, sentences, and words, for the task of opinion detection. In this work we revise our previous approach, which combines document-level heuristics with a semantic-similarity-based method. We evaluate this semantic similarity approach on a large data collection using three different setups involving both sentences and passages, and then compare the performance of our approach across these setups. For evaluation, we use the TREC Blog 2006 collection (148 GB) with the 50 topics of TREC Blog 2006, over a baseline obtained through the Terrier information retrieval platform. Results show that our approach improves the baseline opinion MAP by 28.89%, 30.13%, and 32.26% using setups one, two, and three, respectively.
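To illustrate the kind of semantic similarity computation the abstract refers to, the sketch below shows a minimal Lesk-style gloss-overlap scorer: the relatedness between a context and a candidate word sense is approximated by the number of content words the context shares with the sense's dictionary gloss. This is not the authors' implementation; the glosses, stopword list, and function names here are illustrative assumptions only.

```python
# Minimal Lesk-style gloss-overlap sketch (illustrative, not the paper's code).

STOPWORDS = {"a", "an", "the", "of", "in", "is", "to", "and", "or", "at", "that"}

def content_words(text):
    """Lowercase, split on whitespace, and drop stopwords."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def gloss_overlap(context, gloss):
    """Count content words shared between a context and a sense gloss."""
    return len(content_words(context) & content_words(gloss))

def best_sense(context, sense_glosses):
    """Pick the sense whose gloss overlaps most with the context."""
    return max(sense_glosses, key=lambda s: gloss_overlap(context, sense_glosses[s]))

# Toy glosses (hypothetical, not actual WordNet entries).
glosses = {
    "bank#finance": "a financial institution that accepts money deposits",
    "bank#river": "sloping land beside a body of water",
}

context = "i went to deposit money at the bank"
print(best_sense(context, glosses))  # -> bank#finance
```

In practice a richer variant of this idea (e.g. extended gloss overlaps over WordNet relations, as in the WordNet::Similarity package) would replace the toy glosses and whitespace tokenizer used here.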