The Web and social media give us access to a wealth of information that differs from traditional collections not only in quantity but also in character: descriptions from professionals are now supplemented with user-generated content. This challenges modern search systems, which are built on the classical model of topical relevance and ad hoc search: how well does their effectiveness transfer to the changing nature of information and to new types of information needs and search tasks? We use the INEX 2011 Books and Social Search Track's collection of book descriptions from Amazon and the social cataloguing site LibraryThing, and compare classical IR with social book search in the context of the LibraryThing discussion forums, where members ask for book suggestions. Specifically, we compare book suggestions on the forum with Mechanical Turk judgements on topical relevance and recommendation, examining both the judgements themselves and the system evaluations they produce. We find, first, that the book suggestions on the forum form a complete enough set of relevance judgements for system evaluation. Second, topical relevance judgements result in a different system ranking from evaluation based on the forum suggestions: although topical relevance is an important aspect of social book search, it is not sufficient for evaluation. Third, professional metadata alone is often not enough to determine the topical relevance of a book; user reviews provide a better signal. Fourth, user-generated content is more effective for social book search than professional metadata. Based on these findings, we propose an experimental evaluation methodology that better reflects the complexities of social book search.
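The second finding rests on comparing the system ranking produced by topical-relevance judgements with the one produced by forum suggestions. A standard way to quantify such disagreement is a rank correlation coefficient such as Kendall's tau, which is commonly used in IR evaluation studies. The sketch below is illustrative only: the system names and both rankings are hypothetical, not data from the paper.

```python
from itertools import combinations

def kendall_tau(ranking_a, ranking_b):
    """Kendall's tau between two rankings of the same systems,
    each given as a list ordered from best to worst (no ties)."""
    # Map each system to its rank position in either ranking.
    pos_a = {s: i for i, s in enumerate(ranking_a)}
    pos_b = {s: i for i, s in enumerate(ranking_b)}
    concordant = discordant = 0
    for s, t in combinations(pos_a, 2):
        # A pair is concordant if both rankings order it the same way.
        if (pos_a[s] - pos_a[t]) * (pos_b[s] - pos_b[t]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical rankings of four systems: one from topical-relevance
# judgements, one from forum book suggestions.
by_topical = ["sysA", "sysB", "sysC", "sysD"]
by_forum = ["sysB", "sysA", "sysD", "sysC"]
print(kendall_tau(by_topical, by_forum))  # → 0.3333333333333333
```

A tau of 1.0 means the two judgement sets rank systems identically; values well below 1.0, as in this toy case, indicate that the choice of judgement set materially changes which systems appear best.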