An evaluation of retrieval effectiveness for a full-text document-retrieval system
Communications of the ACM
Automatic text processing: the transformation, analysis, and retrieval of information by computer
Wizard of Oz studies: why and how
IUI '93 Proceedings of the 1st international conference on Intelligent user interfaces
A graphical query interface based on aggregation/generalization hierarchies
Information Systems
Passage-level evidence in document retrieval
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
A case for interaction: a study of interactive information retrieval behavior and effectiveness
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval
Searcher performance in question answering
Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
Retrieval evaluation with incomplete information
Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval
User-Oriented Relevance Judgment: A Conceptual Model
HICSS '05 Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05) - Track 4 - Volume 04
User performance versus precision measures for simple search tasks
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Exploring the limits of single-iteration clarification dialogs
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Fact-focused novelty detection: a feasibility study
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
User simulations for evaluating answers to question series
Information Processing and Management: an International Journal
How well does result relevance predict session satisfaction?
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
Using provenance to aid in personal file search
ATC'07 Proceedings of the 2007 USENIX Annual Technical Conference
How do users find things with PubMed?: towards automatic utility evaluation with user simulations
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
The good and the bad system: does the test collection predict users' effectiveness?
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
User adaptation: good results from poor systems
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Relevance thresholds in system evaluations
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Experiences evaluating personal metasearch
Proceedings of the second international symposium on Information interaction in context
Rank-biased precision for measurement of retrieval effectiveness
ACM Transactions on Information Systems (TOIS)
Toward automatic facet analysis and need negotiation: Lessons from mediated search
ACM Transactions on Information Systems (TOIS)
Multiple coordinated views for searching and navigating Web content repositories
Information Sciences: an International Journal
Including summaries in system evaluation
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
Evaluating web search using task completion time
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
An Analysis of NP-Completeness in Novelty and Diversity Ranking
ICTIR '09 Proceedings of the 2nd International Conference on Theory of Information Retrieval: Advances in Information Retrieval Theory
Explaining User Performance in Information Retrieval: Challenges to IR Evaluation
ICTIR '09 Proceedings of the 2nd International Conference on Theory of Information Retrieval: Advances in Information Retrieval Theory
Probabilistic models of ranking novel documents for faceted topic retrieval
Proceedings of the 18th ACM conference on Information and knowledge management
Metric and Relevance Mismatch in Retrieval Evaluation
AIRS '09 Proceedings of the 5th Asia Information Retrieval Symposium on Information Retrieval Technology
ACM Transactions on Information Systems (TOIS)
Using clicks as implicit judgments: expectations versus observations
ECIR'08 Proceedings of the 30th European Conference on IR Research: Advances in Information Retrieval
Tightly coupled views for navigating content repositories
Companion Proceedings of the XIV Brazilian Symposium on Multimedia and the Web
A review of factors influencing user satisfaction in information retrieval
Journal of the American Society for Information Science and Technology
Do user preferences and evaluation measures line up?
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Human performance and retrieval precision revisited
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Comparing the sensitivity of information retrieval metrics
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Evaluating search systems using result page context
Proceedings of the third symposium on Information interaction in context
Web search solved?: all result rankings the same?
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
A comparison of user and system query performance predictions
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Evaluating search engines by clickthrough data
ISWC'10 Proceedings of the 9th international semantic web conference on The semantic web - Volume Part II
An analysis of NP-completeness in novelty and diversity ranking
Information Retrieval
The effect of user characteristics on search effectiveness in information retrieval
Information Processing and Management: an International Journal
Click the search button and be happy: evaluating direct and immediate information access
Proceedings of the 20th ACM international conference on Information and knowledge management
IR research: systems, interaction, evaluation and theories
ACM SIGIR Forum
The case of the duplicate documents: measurement, search, and science
APWeb'06 Proceedings of the 8th Asia-Pacific Web conference on Frontiers of WWW Research and Development
Using preference judgments for novel document retrieval
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Amount of invested mental effort (AIME) in online searching
Information Processing and Management: an International Journal
Metaphor: a system for related search recommendations
Proceedings of the 21st ACM international conference on Information and knowledge management
Summaries, ranked retrieval and sessions: a unified framework for information access evaluation
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Choices in batch information retrieval evaluation
Proceedings of the 18th Australasian Document Computing Symposium
Evaluation in Music Information Retrieval
Journal of Intelligent Information Systems
We describe a user study that examined the relationship between the quality of an information retrieval system and the effectiveness of its users in performing a task. The task involved finding answer facets of questions pertaining to a collection of newswire documents spanning a six-month period. We artificially created sets of ranked lists at increasing levels of quality by blending the output of a state-of-the-art retrieval system with truth data created by annotators. Subjects performed the task by using these ranked lists to guide their labeling of answer passages in the retrieved articles. We found that as system accuracy improves, subjects' time on task and error rate decrease, and their rate of finding new correct answers increases. There is a large intermediate region in which the difference in utility is not significant; our results suggest that there is some accuracy threshold for this task beyond which user utility improves rapidly, but more experiments are needed to examine the area around that threshold closely.
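The blending procedure described above can be sketched in code. The following is an illustrative Python sketch, not the paper's exact algorithm: the function name `blend_ranking`, the quality parameter, and the per-slot sampling rule are assumptions. It fills each rank position from the annotator truth data with probability `quality`, otherwise from the system's original ranking, so higher `quality` values yield ranked lists closer to the ideal.

```python
import random

def blend_ranking(system_ranking, relevant_docs, quality, k=20, seed=0):
    """Build a ranked list of length k whose closeness to the truth data
    is controlled by `quality` in [0, 1]: each slot is drawn from the
    known-relevant documents with probability `quality`, otherwise from
    the system's original ranking. Illustrative sketch only.
    """
    rng = random.Random(seed)          # fixed seed for reproducible lists
    truth = list(relevant_docs)        # annotator-judged relevant docs
    system = list(system_ranking)      # system output, best first
    blended, seen = [], set()
    while len(blended) < k and (truth or system):
        # Fall back to the other source when one is exhausted.
        use_truth = truth and (not system or rng.random() < quality)
        doc = truth.pop(0) if use_truth else system.pop(0)
        if doc not in seen:            # avoid duplicate documents
            seen.add(doc)
            blended.append(doc)
    return blended
```

At `quality=1.0` the list is drawn entirely from the truth data, and at `quality=0.0` it reproduces the system's ranking; intermediate values produce the graded levels of accuracy the study compares.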