In this study, we seek to understand how providing feedback to users about their performance with an interactive information retrieval (IIR) system affects their evaluations of that system. Sixty subjects completed three recall-based search tasks with an experimental IIR system and were asked to evaluate the system after each task and again after finishing all three tasks. Before completing the final evaluation, three-fourths of the subjects were given feedback about their performance. Subjects were randomly assigned to one of four feedback conditions: a baseline condition in which no feedback was provided; an actual feedback condition in which subjects were shown their real performance; and two deception conditions in which subjects were told that they had performed very well or very poorly. Results show that the type of feedback provided significantly affected subjects' evaluations of the system; most importantly, in the actual feedback condition, subjects' satisfaction ratings differed significantly before and after feedback was provided.