This paper gives an overview of the INEX 2007 Ad Hoc Track. The main purpose of the Ad Hoc Track was to investigate the value of the internal document structure (as provided by the XML mark-up) for retrieving relevant information. For this reason, the retrieval results were liberalized to arbitrary passages, and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The INEX 2007 Ad Hoc Track featured three tasks. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was required. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) was required for each article. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval, both in the context of content-only (CO, or keyword) search and of content-and-structure (CAS, or structured) search.
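The Focused Task's non-overlap requirement can be illustrated with a small sketch (not from the paper itself): two XML element results overlap when one is an ancestor of the other, i.e. one XPath is a prefix of the other, so a ranked run can be filtered greedily, keeping only the highest-ranked member of each overlapping set. The paths and function name below are hypothetical.

```python
# Illustrative sketch: enforcing the Focused Task's non-overlap
# constraint on a ranked list of XML element results.
# Two elements overlap when one XPath is an ancestor of the other,
# i.e. the shorter path is a step-by-step prefix of the longer one.

def non_overlapping(ranked_paths):
    """Keep the highest-ranked result from each set of overlapping elements."""
    kept = []
    for path in ranked_paths:  # assumed already sorted by descending score
        steps = path.strip("/").split("/")
        overlaps = False
        for prev in kept:
            prev_steps = prev.strip("/").split("/")
            n = min(len(steps), len(prev_steps))
            if steps[:n] == prev_steps[:n]:  # ancestor/descendant relation
                overlaps = True
                break
        if not overlaps:
            kept.append(path)
    return kept

# Hypothetical ranked run, best-scoring result first:
ranked = [
    "/article[1]/sec[2]/p[3]",  # kept: highest-ranked
    "/article[1]/sec[2]",       # dropped: ancestor of the result above
    "/article[1]/sec[4]",       # kept: disjoint from everything kept so far
]
print(non_overlapping(ranked))
# → ['/article[1]/sec[2]/p[3]', '/article[1]/sec[4]']
```

Passage runs would need an analogous check on character-offset ranges rather than XPath prefixes, but the greedy rank-order filtering is the same idea.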