The use of phrases and structured queries in information retrieval
SIGIR '91 Proceedings of the 14th annual international ACM SIGIR conference on Research and development in information retrieval
Using graded relevance assessments in IR evaluation
Journal of the American Society for Information Science and Technology
Focused Access to XML Documents
Term proximity scoring for keyword-based retrieval systems
ECIR'03 Proceedings of the 25th European conference on IR research
Analysis of the INEX 2009 ad hoc track results
INEX'09 Proceedings of the 8th international conference on Initiative for the evaluation of XML retrieval: focused retrieval and evaluation
Domain-specific information retrieval using recommenders
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
ListOPT: learning to optimize for XML ranking
PAKDD'11 Proceedings of the 15th Pacific-Asia conference on Advances in knowledge discovery and data mining - Volume Part II
Enhanced information retrieval using domain-specific recommender models
ICTIR'11 Proceedings of the Third international conference on Advances in information retrieval theory
Combining strategies for XML retrieval
INEX'10 Proceedings of the 9th international conference on Initiative for the evaluation of XML retrieval: comparative evaluation of focused retrieval
XML retrieval more efficient using double scoring scheme
INEX'10 Proceedings of the 9th international conference on Initiative for the evaluation of XML retrieval: comparative evaluation of focused retrieval
XML information retrieval through tree edit distance and structural summaries
AIRS'11 Proceedings of the 7th Asia conference on Information Retrieval Technology
ACM SIGIR Forum
Retrieval evaluation on focused tasks
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Kinship contextualization: utilizing the preceding and following structural elements
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Reading contexts for structured documents retrieval
Proceedings of the 10th Conference on Open Research Areas in Information Retrieval
Selection fusion in semi-structured retrieval
Proceedings of the 22nd ACM international conference on Conference on information & knowledge management
Position-based contextualization for passage retrieval
Proceedings of the 22nd ACM international conference on Conference on information & knowledge management
An evaluation framework for cross-lingual link discovery
Information Processing and Management: an International Journal
This paper gives an overview of the INEX 2009 Ad Hoc Track. The main goals of the Ad Hoc Track were three-fold. The first goal was to investigate the impact of collection scale and markup, by using a new collection that is again based on the Wikipedia but is over 4 times larger, with longer articles and additional semantic annotations. For this reason the Ad Hoc Track tasks stayed unchanged, and the Thorough Task of INEX 2002-2006 returned. The second goal was to study the impact of more verbose queries on retrieval effectiveness, by using the available markup as structural constraints--now using both the Wikipedia's layout-based markup and the enriched semantic markup--and by the use of phrases. The third goal was to compare different result granularities by allowing systems to retrieve XML elements, ranges of XML elements, or arbitrary passages of text. This investigates the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. The INEX 2009 Ad Hoc Track featured four tasks. For the Thorough Task, a ranked list of results (elements or passages) ordered by estimated relevance was required. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was required. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) per article was required. We discuss the setup of the track and the results for the four tasks.
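The non-overlap constraint of the Focused Task can be illustrated with a small sketch: given a relevance-ranked run whose results are character ranges within documents, a common post-processing step is to greedily keep each result only if it does not overlap a higher-ranked result already kept from the same document. This is a hypothetical illustration of the constraint, not the official INEX submission or evaluation tooling; result tuples and names are assumptions.

```python
def focused_filter(ranked_results):
    """Greedily enforce the Focused Task's non-overlap constraint.

    ranked_results: list of (doc_id, start, end, score) tuples,
    already sorted by descending estimated relevance. A result is
    kept only if its [start, end) range does not overlap any
    already-kept result from the same document.
    """
    kept = []
    for doc_id, start, end, score in ranked_results:
        overlaps = any(
            d == doc_id and start < e and end > s
            for d, s, e, _ in kept
        )
        if not overlaps:
            kept.append((doc_id, start, end, score))
    return kept


# Example run: the second result is nested inside the first,
# so it is dropped; the third does not overlap and is kept.
run = [("a", 0, 100, 0.9), ("a", 10, 50, 0.8), ("a", 200, 300, 0.7)]
print(focused_filter(run))
```

The same range representation covers both XML elements and arbitrary passages, which is what lets element-based and passage-based systems be compared under one task definition.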