Evaluation of sentiment analysis, like large-scale IR evaluation, relies on human assessors to create accurate judgments. Subjectivity in judgments is a problem for relevance assessment, and even more so for sentiment annotation. In this study we examine the degree to which assessors agree on sentence-level sentiment annotation. We show that inter-assessor agreement is not contingent on document length or on the frequency of sentiment, but correlates positively with automated opinion retrieval performance. We also examine the individual annotation categories to determine which pose the most difficulty for annotators.
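The abstract does not specify how agreement is quantified, but a standard choice for two assessors is a chance-corrected statistic such as Cohen's kappa. The sketch below is purely illustrative: the label set (positive, negative, neutral, mixed) and the annotation data are assumptions, not taken from the paper.

```python
from collections import Counter

# Hypothetical sentence-level sentiment labels from two assessors.
assessor_a = ["pos", "neg", "neu", "pos", "mixed", "neg", "neu", "neu"]
assessor_b = ["pos", "neg", "pos", "pos", "neg",   "neg", "neu", "mixed"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators."""
    n = len(a)
    # Fraction of sentences where the two assessors chose the same label.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected by chance, given each assessor's label marginals.
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(assessor_a, assessor_b):.3f}")  # 0.500 here
```

A per-category breakdown (e.g., computing agreement restricted to sentences either assessor labelled "mixed") would support the kind of category-level difficulty analysis the abstract describes.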