Standard Information Retrieval (IR) metrics are ill suited to new paradigms such as XML or Web IR, in which the retrievable information units are document elements and/or sets of related documents. Part of the problem stems from the classical assumptions about user behavior: they account for neither the structural or logical context of document elements nor the possibility of navigating between units. This article proposes an explicit, formal user model that encompasses a wide variety of user behaviors. Based on this model, we extend the probabilistic precision-recall metric to handle these new IR paradigms.
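To make the contrast concrete, here is a minimal sketch of set-based precision/recall alongside a toy "navigation-aware" recall, in which a relevant element also counts (with some probability) when the user can reach it in one hop from a retrieved element. The function names, the `neighbors` structure, and the `p_nav` parameter are illustrative assumptions for this sketch, not the formulation used in the article.

```python
def precision_recall(retrieved, relevant):
    """Classical set-based precision and recall over atomic result units."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall


def expected_recall(retrieved, relevant, neighbors, p_nav=0.5):
    """Toy navigation-aware recall (illustrative, not the article's metric).

    A relevant unit counts fully if retrieved directly, and with
    probability `p_nav` if it is one navigation hop away from some
    retrieved unit (per the hypothetical `neighbors` adjacency map).
    """
    retrieved = set(retrieved)
    found = 0.0
    for r in relevant:
        if r in retrieved:
            found += 1.0
        elif any(n in retrieved for n in neighbors.get(r, ())):
            found += p_nav
    return found / len(relevant) if relevant else 0.0
```

Under this sketch, retrieving a section adjacent to a relevant one contributes partial credit, which the classical set-based recall ignores entirely; that gap is the kind of behavior the article's explicit user model is designed to capture.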