Automatic text processing: the transformation, analysis, and retrieval of information by computer
An analysis of Internet search engines: assessment of over 200 search queries
Computers in Libraries
The anatomy of a large-scale hypertextual Web search engine
WWW7 Proceedings of the seventh international conference on World Wide Web 7
Results and challenges in Web search evaluation
WWW '99 Proceedings of the eighth international conference on World Wide Web
First 20 precision among World Wide Web search services (search engines)
Journal of the American Society for Information Science
Relevance ranking for one to three term queries
Information Processing and Management: an International Journal
A new statistical method for performance evaluation of search engines
ICTAI '00 Proceedings of the 12th IEEE International Conference on Tools with Artificial Intelligence
Using titles and category names from editor-driven taxonomies for automatic evaluation
CIKM '03 Proceedings of the twelfth international conference on Information and knowledge management
A subjective measure of web search quality
Information Sciences—Informatics and Computer Science: An International Journal
The effectiveness of web search engines for retrieving relevant ecommerce links
Information Processing and Management: an International Journal
Information retrieval on the web: improving relevancy by disambiguating user queries
ACST'06 Proceedings of the 2nd IASTED international conference on Advances in computer science and technology
Repeatable evaluation of search services in dynamic environments
ACM Transactions on Information Systems (TOIS)
Search engines evaluation for P2P based digital libraries
Proceedings of the 2008 Euro American Conference on Telematics and Information Systems
Web search solved?: all result rankings the same?
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
An overview of Web search evaluation methods
Computers and Electrical Engineering
In this paper, we present a general approach for statistically evaluating precision of search engines on the Web. Search engines are evaluated in two steps based on a large number of sample queries: (a) computing relevance scores of hits from each search engine, and (b) ranking the search engines based on statistical comparison of the relevance scores. In computing relevance scores of hits, we study four relevance scoring algorithms. Three of them are variations of algorithms widely used in the traditional information retrieval field. They are cover density ranking, Okapi similarity measurement, and vector space model algorithms. In addition, we develop a new three-level scoring algorithm to mimic commonly used manual approaches. In ranking the search engines in terms of precision, we apply a statistical metric called probability of win. In our experiments, six popular search engines, AltaVista, Fast, Google, Go, iWon, and NorthernLight, were evaluated based on queries from two domains of interest: parallel and distributed processing, and knowledge and data engineering. The first query set contains 1726 queries collected from the index terms of papers published in the IEEE Transactions on Knowledge and Data Engineering. The second set contains 1383 queries collected from the index terms of papers published in the IEEE Transactions on Parallel and Distributed Systems. Search engines were queried and compared in two different search modes: the default search mode and the exact phrase search mode. Our experimental results show that these six search engines performed differently under different search modes and scoring methods. Overall, Google was the best. NorthernLight was mostly second in the default search mode, whereas iWon was mostly second in the exact phrase search mode.
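The two-step evaluation described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it uses a plain term-frequency cosine similarity as a stand-in for the vector space scoring step, and a simplified per-query win fraction (ties counted as half a win) as a stand-in for the probability-of-win metric, whose exact statistical definition is given in the paper.

```python
from collections import Counter
from math import sqrt

def cosine_score(query, document):
    # Vector space model sketch: cosine similarity between raw
    # term-frequency vectors of the query and a retrieved hit.
    q = Counter(query.lower().split())
    d = Counter(document.lower().split())
    dot = sum(q[t] * d[t] for t in q.keys() & d.keys())
    norm = (sqrt(sum(v * v for v in q.values()))
            * sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def probability_of_win(scores_a, scores_b):
    # Hypothetical simplification: fraction of queries on which
    # engine A outscores engine B, with ties worth half a win.
    # The paper's metric instead compares relevance scores with a
    # statistical test over the full query sample.
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a, b in zip(scores_a, scores_b))
    return wins / len(scores_a)
```

For example, given per-query relevance scores for two engines over the same query set, `probability_of_win([0.9, 0.8, 0.5], [0.4, 0.8, 0.6])` yields 0.5: one win, one tie, one loss across three queries.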