In this poster we investigate the associations between the perceived ease of assessing situational relevance on a four-point scale, perceived satisfaction with the retrieval results, and the actual relevance assessments and retrieval performance of test collection assessors working on their own genuine information tasks. Ease of assessment and search satisfaction are cross-tabulated with retrieval performance measured by Normalized Discounted Cumulated Gain (nDCG). The results show that when assessors find only a small number of relevant documents, they tend to be dissatisfied with the search results and, in addition, obtain lower performance for all document types involved, except monographic records.
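For context, a minimal sketch of one common formulation of the nDCG measure used above (following Järvelin and Kekäläinen's cumulated gain-based evaluation; the poster's exact gain mapping from the four-point scale and the discount base are assumptions here): with graded gain $g_i$ assigned to the document at rank $i$,

$$\mathrm{DCG}@k = \sum_{i=1}^{k} \frac{g_i}{\log_2(i+1)}, \qquad \mathrm{nDCG}@k = \frac{\mathrm{DCG}@k}{\mathrm{IDCG}@k},$$

where $\mathrm{IDCG}@k$ is the $\mathrm{DCG}@k$ of an ideal ranking of the judged documents, so that $\mathrm{nDCG}@k \in [0, 1]$ and results are comparable across topics.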