This paper describes the official measures of retrieval effectiveness employed for the Ad Hoc Track at INEX 2007. Whereas in earlier years any XML element, but only XML elements, could be retrieved, the result format has been liberalized to arbitrary passages. In response, the INEX 2007 measures are based on the amount of highlighted text retrieved, leading to natural extensions of the well-established measures of precision and recall. The following measures are defined:

- The Focused Task is evaluated by interpolated precision at 1% recall (iP[0.01]) in terms of the highlighted text retrieved.
- The Relevant in Context Task is evaluated by mean average generalized precision (MAgP), where the generalized score per article is based on the retrieved highlighted text.
- The Best in Context Task is also evaluated by mean average generalized precision (MAgP), but here the generalized score per article is based on the distance to the assessor's best-entry point.
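As a rough sketch of the Focused Task measure, interpolated precision at a recall point can be computed from per-rank character counts of retrieved text and of retrieved highlighted (relevant) text: iP[x] is the highest precision observed at any rank whose recall reaches at least x. The function name and data layout below are illustrative assumptions, not the official INEX evaluation code.

```python
def interpolated_precision_at(ranked_passages, total_relevant_chars,
                              recall_point=0.01):
    """Sketch of iP[recall_point] in terms of highlighted text.

    ranked_passages: list of (retrieved_chars, highlighted_chars) tuples,
        one per rank, where highlighted_chars counts the characters of the
        passage that the assessor highlighted as relevant.
    total_relevant_chars: total highlighted characters for the topic.
    """
    retrieved = 0
    relevant = 0
    best = 0.0
    for retrieved_chars, highlighted_chars in ranked_passages:
        retrieved += retrieved_chars
        relevant += highlighted_chars
        recall = relevant / total_relevant_chars
        precision = relevant / retrieved
        # iP[x] takes the maximum precision over ranks with recall >= x.
        if recall >= recall_point:
            best = max(best, precision)
    return best
```

For example, a run whose first passage retrieves 100 characters of which 50 are highlighted already exceeds 1% recall against 1000 relevant characters, so iP[0.01] is at least 0.5 regardless of later, less precise passages.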