Relevance judgment has traditionally been considered a personal and subjective matter, and a user's search and its results are treated as an isolated event. To account for the collaborative nature of information retrieval (IR) in a group, organizational, or even societal context, this article proposes a method that measures relevance by group/peer consensus and that can be used in IR experiments. Under this method, the relevance of a document is determined by the number of users (or experiment participants) who retrieve it for the same search question: the more users who retrieve it, the more relevant the document is considered. A user's search performance can then be measured by a relevance score based on this notion. The article reports the results of an experiment that uses the method to compare the search performance of different types of users, and discusses open issues with the method and directions for future work.
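The consensus notion described in the abstract can be sketched as follows. This is a minimal illustration, not the article's actual implementation: the function names, the per-user result lists, and the optional normalization are all assumptions made for the example.

```python
from collections import Counter

def consensus_relevance(retrievals):
    """Consensus relevance of each document = the number of users who
    retrieved it for the same search question (the notion described above).
    `retrievals` maps each user to the list of documents they retrieved."""
    counts = Counter()
    for docs in retrievals.values():
        counts.update(set(docs))  # each user contributes at most one vote per document
    return counts

def user_score(user, retrievals, normalize=False):
    """A user's search performance: the sum of consensus relevance over the
    documents that user retrieved. The normalization by group size and result
    count is an illustrative choice, not taken from the article."""
    rel = consensus_relevance(retrievals)
    docs = set(retrievals[user])
    total = sum(rel[d] for d in docs)
    if normalize:
        total /= len(retrievals) * max(len(docs), 1)
    return total

# Hypothetical retrieval results of three participants for one search question
retrievals = {
    "u1": ["d1", "d2", "d3"],
    "u2": ["d2", "d3"],
    "u3": ["d2", "d4"],
}
rel = consensus_relevance(retrievals)  # d2 is retrieved by all three users,
                                       # so it is judged the most relevant
```

In this toy example, `d2` receives a consensus count of 3 while `d1` and `d4` receive 1 each, and a participant's score simply aggregates those counts over their own result set.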