Users of the World Wide Web are confronted not only by an immense overabundance of information, but also by a plethora of tools for finding the web pages that suit their information needs. Web search engines differ widely in interface, features, coverage of the web, ranking methods, delivery of advertising, and more. In this paper, we present a method for comparing search engines automatically based on how they rank known-item search results. Because the engines search overlapping (but different) subsets of the web collected at different points in time, evaluating them poses significant challenges to traditional information retrieval methodology. Our method uses known-item searching, comparing the relative ranks of the known items in the engines' result lists. Our approach automatically constructs known-item queries through query log analysis and automatically determines the corresponding known-item result by analyzing editor comments from the ODP (Open Directory Project). We apply this comparison to five well-known search services (Lycos, Netscape, Fast, Google, HotBot) and find that some perform known-item searches better than others, but that the majority are statistically equivalent.
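To make the known-item comparison concrete, the sketch below scores each engine by the reciprocal rank of the known item over a set of queries. This is a minimal illustration under stated assumptions: the engine names, the example queries and URLs, and the use of mean reciprocal rank as the summary statistic are illustrative choices, not necessarily the exact procedure or metric used in the paper, and result lists are assumed to have been fetched beforehand rather than queried live.

```python
# Minimal sketch of known-item evaluation over pre-fetched result lists.
# All names and data below are hypothetical examples.

from typing import Dict, List


def reciprocal_rank(results: List[str], known_item: str) -> float:
    """Return 1/rank of the known item in a ranked result list, or 0.0 if absent."""
    for rank, url in enumerate(results, start=1):
        if url == known_item:
            return 1.0 / rank
    return 0.0


def mean_reciprocal_rank(
    runs: Dict[str, List[str]],       # query -> ranked URLs returned by one engine
    known_items: Dict[str, str],      # query -> the single known-item URL
) -> float:
    """Average reciprocal rank of the known item over all queries."""
    scores = [reciprocal_rank(runs.get(q, []), url) for q, url in known_items.items()]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Hypothetical known-item queries with their target pages.
    known_items = {
        "python language homepage": "https://www.python.org/",
        "acm digital library": "https://dl.acm.org/",
    }

    # Hypothetical pre-fetched rankings from two engines.
    engines = {
        "engine_a": {
            "python language homepage": ["https://www.python.org/", "https://docs.python.org/"],
            "acm digital library": ["https://www.acm.org/", "https://dl.acm.org/"],
        },
        "engine_b": {
            "python language homepage": ["https://docs.python.org/", "https://www.python.org/"],
            "acm digital library": ["https://dl.acm.org/"],
        },
    }

    for name, runs in engines.items():
        print(f"{name}: MRR = {mean_reciprocal_rank(runs, known_items):.3f}")
```

Because only the rank of the known item within each engine's own result list matters, this style of comparison sidesteps the need for a shared document collection, which is what makes it workable when engines crawl different portions of the web at different times.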