The judging of relevance has long been studied in information retrieval, particularly in the creation of relevance judgments for test collections. While the criteria by which assessors judge relevance have been studied intensively, little work has investigated the process an individual assessor goes through to judge the relevance of a document. In this paper, we focus on the process by which relevance is judged and, in particular, on the degree of effort a user must expend to judge relevance. By better understanding this effort in isolation, we can provide data that may be used to build better models of search. We present the results of an empirical evaluation of the effort users must exert to judge the relevance of a document, investigating the effects of relevance level and document size. The results suggest that 'relevant' documents require more effort to judge than highly relevant and not relevant documents, and that effort increases as document size increases.
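The abstract does not state how judging effort was operationalized. As a purely illustrative sketch, assuming effort is logged as time-to-judge per document, the breakdown by relevance level and document size described above could be computed along the following lines; every name and value here is hypothetical and does not reproduce the study's data or method.

    from statistics import mean

    # Hypothetical judgment log: (relevance_level, doc_length_in_words, seconds_to_judge).
    # All values are illustrative only.
    judgments = [
        ("highly_relevant", 300, 21.0),
        ("relevant", 300, 34.5),
        ("not_relevant", 300, 18.2),
        ("highly_relevant", 1200, 40.1),
        ("relevant", 1200, 62.3),
        ("not_relevant", 1200, 29.8),
    ]

    def mean_effort_by(key_fn, records):
        # Group time-to-judge by a key (relevance level or document size) and average it.
        groups = {}
        for rec in records:
            groups.setdefault(key_fn(rec), []).append(rec[2])
        return {key: mean(times) for key, times in groups.items()}

    # Effort broken down by relevance level, then by document size.
    print(mean_effort_by(lambda r: r[0], judgments))
    print(mean_effort_by(lambda r: r[1], judgments))

Under the pattern the paper reports, such a breakdown would show the 'relevant' group averaging higher than both the highly relevant and not relevant groups, and the longer-document group averaging higher than the shorter one.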