Modelling epistemic uncertainty in IR evaluation

  • Authors:
  • Murat Yakici; Mark Baillie; Ian Ruthven; Fabio Crestani

  • Affiliations:
  • University of Strathclyde, Glasgow, United Kingdom (Yakici, Baillie, Ruthven); Faculty of Informatics, Lugano, Switzerland (Crestani)

  • Venue:
  • SIGIR '07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2007

Abstract

Modern information retrieval (IR) test collections violate the completeness assumption of the Cranfield paradigm. In order to make the best use of the available resources, only a sample of documents (i.e. the pool) is judged for relevance by human assessors. The subsequent evaluation protocol does not distinguish between assessed and unassessed documents, as documents that are not in the pool are assumed to be not relevant for the topic. This is beneficial from a practical point of view, as relative performance can be compared with confidence if the experimental conditions are fair for all systems. However, given the incompleteness of relevance assessments, two forms of uncertainty emerge during evaluation. The first is aleatory uncertainty, which refers to variation in system performance across the topic set and is often addressed through the use of statistical significance tests. The second is epistemic uncertainty, which refers to the amount of knowledge (or ignorance) we have about the estimate of a system's performance. Epistemic uncertainty is a consequence of incompleteness and is not addressed by the current evaluation protocol. In this study, we present a first attempt at modelling both the aleatory and epistemic uncertainty associated with IR evaluation, accounting for both the variability in system performance and the amount of knowledge known about the performance estimate.
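
The effect of incompleteness described in the abstract can be illustrated with a small sketch. The Python fragment below is an illustration under our own assumptions, not the model proposed in the paper: the hypothetical helper `precision_with_bound` computes precision in the standard way, treating unjudged retrieved documents as non-relevant, and also the optimistic value obtained if every unjudged retrieved document were relevant. The width of that interval is one crude reflection of the epistemic uncertainty left by incomplete relevance assessments.

```python
# Illustrative sketch only (not the paper's model): how unjudged documents
# leave an interval of possible metric values under incomplete judgements.

def precision_with_bound(retrieved, qrels):
    """Return (precision, optimistic_precision) for one ranked list.

    retrieved : list of document ids returned by a system
    qrels     : dict mapping judged document ids to 0/1 relevance

    precision            -- standard protocol: unjudged docs counted non-relevant
    optimistic_precision -- upper bound: every unjudged retrieved doc assumed relevant
    The gap between the two hints at the epistemic uncertainty of the estimate.
    """
    relevant = sum(1 for d in retrieved if qrels.get(d) == 1)
    unjudged = sum(1 for d in retrieved if d not in qrels)
    n = len(retrieved)
    return relevant / n, (relevant + unjudged) / n


# Toy example: three judged-relevant, one judged non-relevant, two unjudged docs.
qrels = {"d1": 1, "d2": 1, "d3": 0, "d4": 1}
run = ["d1", "d2", "d3", "d4", "d5", "d6"]
print(precision_with_bound(run, qrels))  # approximately (0.5, 0.833)
```

In this toy run the standard protocol reports a precision of 0.5, but the true value could lie anywhere up to 0.83 depending on the unjudged documents; aleatory uncertainty would additionally show up as variation of such scores across the topic set.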