The field of information retrieval has witnessed over 50 years of research on retrieval methods for metadata descriptions and controlled indexing languages, the prototypical example being the library catalogue. It seems only natural to resort to additional data to improve book retrieval, such as the text of the book in whole or in part (table of contents, abstract) or social data contributed through crowdsourcing on social cataloguing sites like LibraryThing. Without denying the potential value of such additional data, we challenge the underlying assumption that applying novel retrieval methods to traditional book descriptions cannot improve book retrieval. Specifically, this paper investigates the effectiveness of author rankings in a library catalogue. We show that a standard retrieval model yields a book ranking that meets and exceeds the effectiveness of catalogue systems. We also show that, using expert-finding methods, we can obtain effective author rankings that complement the traditional book rankings. Moreover, ranking books on author scores leads to substantial and significant improvements over the original book rankings. Basing the book ranking on a combination of the author scores and the book scores yields no further improvements. Hence our results clearly demonstrate the importance of author ranking for retrieving library catalogue records: authors capture an important aspect of relevance, and one that is not obvious to those unfamiliar with a specific area of interest.
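The re-ranking idea above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's actual models: it assumes hypothetical book scores from some baseline retrieval run, derives author scores by aggregating the scores of each author's books (in the spirit of document-centric expert-finding models), re-ranks books on the best score of their authors, and also shows a simple linear combination of book and author scores. All identifiers and values here are invented for illustration.

```python
from collections import defaultdict

# Toy catalogue: hypothetical retrieval scores for four book records,
# and the author(s) listed on each record.
book_scores = {"b1": 0.90, "b2": 0.40, "b3": 0.35, "b4": 0.10}
book_authors = {"b1": ["alice"], "b2": ["alice"],
                "b3": ["bob"], "b4": ["carol"]}

def author_scores(book_scores, book_authors):
    """Score each author by summing the retrieval scores of their books,
    a simple stand-in for a document-centric expert-finding model."""
    scores = defaultdict(float)
    for book, score in book_scores.items():
        for author in book_authors[book]:
            scores[author] += score
    return dict(scores)

def rank_books_by_author(book_scores, book_authors):
    """Re-rank books on the best score among their authors."""
    a_scores = author_scores(book_scores, book_authors)
    keyed = {b: max(a_scores[a] for a in book_authors[b])
             for b in book_scores}
    return sorted(keyed, key=keyed.get, reverse=True)

def combined_ranking(book_scores, book_authors, lam=0.5):
    """Rank books on a linear mix of book score and best author score."""
    a_scores = author_scores(book_scores, book_authors)
    keyed = {b: lam * book_scores[b]
                + (1 - lam) * max(a_scores[a] for a in book_authors[b])
             for b in book_scores}
    return sorted(keyed, key=keyed.get, reverse=True)
```

In this toy example, "alice" accumulates score from two retrieved books, so both of her books rise above "bob"'s single higher-ranked book in the author-based ranking, which is the kind of effect the paper attributes to author scores capturing an aspect of relevance that individual record scores miss.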