This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with them. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction, along with reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences among the four engines in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance, as well as significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with the search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both the quantitative analysis and the content analysis provide insight for system design and development, and useful feedback on the strengths and weaknesses of search engines for system improvement.
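As background on two of the performance measures named above, a minimal sketch of the standard definitions (an illustration, not the paper's exact formulas): precision is the fraction of retrieved items judged relevant, and relative recall for one engine is the number of relevant items it retrieved divided by the pooled set of relevant items found by any of the compared engines.

```python
def precision(judgements):
    """Fraction of retrieved items judged relevant.

    `judgements` is a list of 1 (relevant) / 0 (not relevant)
    relevance judgements for one engine's result list.
    """
    return sum(judgements) / len(judgements) if judgements else 0.0


def relative_recall(engine_relevant, all_engines_relevant):
    """Relevant items one engine retrieved, divided by the union of
    relevant items retrieved by all compared engines.

    This pooled denominator is a common stand-in when true recall is
    unknowable, as on the Web.
    """
    pool = set().union(*all_engines_relevant)
    if not pool:
        return 0.0
    return len(set(engine_relevant) & pool) / len(pool)


# Hypothetical example: one engine's top-4 judgements, and the relevant
# documents found by two engines for the same topic.
print(precision([1, 0, 1, 0]))                              # 0.5
print(relative_recall({"a", "b"}, [{"a", "b"}, {"b", "c"}]))  # 2/3
```

Per-engine relative recall scores computed this way can then be compared across engines and disciplines, in the spirit of the study's quantitative analysis.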