There are still no established methods for evaluating browsing and exploratory search tools. In the (multimedia) information retrieval community, evaluations following the Cranfield paradigm (as used, e.g., in TRECVID) have been widely adopted. We have applied two TRECVID-style fact-finding approaches (a retrieval task and a question answering task) as well as a user survey to the evaluation of a video browsing tool. We analyze the correlation between the results of the different methods, whether different aspects can be evaluated independently with the survey, and whether a learning effect can be measured with the different methods. The results show that the retrieval task correlates better with the user experience reported in the survey than the question answering task does. It turns out that the survey measures the general user experience rather than individual aspects of usability, which cannot be analyzed independently.
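A correlation analysis of this kind could, for instance, be carried out with a rank correlation test over per-participant scores. The following is a minimal sketch, assuming hypothetical data; the variable names and values are illustrative and do not reproduce the paper's actual results or analysis pipeline:

```python
from scipy.stats import spearmanr

# Hypothetical per-participant results (one value per participant).
retrieval_score = [0.62, 0.48, 0.71, 0.55, 0.66, 0.43, 0.58, 0.69]  # e.g. mean average precision
qa_score        = [0.50, 0.55, 0.40, 0.60, 0.45, 0.52, 0.38, 0.47]  # fraction of questions answered correctly
survey_rating   = [4.1, 3.2, 4.5, 3.6, 4.3, 2.9, 3.8, 4.4]          # mean user-experience rating (1-5 scale)

# Spearman's rank correlation is a natural choice here because survey
# ratings are ordinal and no linear relationship can be assumed.
rho_ret, p_ret = spearmanr(retrieval_score, survey_rating)
rho_qa, p_qa = spearmanr(qa_score, survey_rating)

print(f"retrieval vs. survey: rho={rho_ret:.2f} (p={p_ret:.3f})")
print(f"QA vs. survey:        rho={rho_qa:.2f} (p={p_qa:.3f})")
```

Under this setup, a higher rank correlation for the retrieval task than for the question answering task would correspond to the finding stated above.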