Cross-language system evaluation: the CLEF campaigns. Journal of the American Society for Information Science and Technology.
TREC: Experiment and Evaluation in Information Retrieval. Digital Libraries and Electronic Publishing.
DBpedia: a nucleus for a web of open data. ISWC'07/ASWC'07: Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference.
GeoCLEF: the CLEF 2005 cross-language geographic information retrieval track overview. CLEF'05: Proceedings of the 6th Workshop of the Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories.
Every document has a geographical scope. Data & Knowledge Engineering.
In GIR, a research field branching from IR, the initial evaluation initiatives were unsurprisingly inspired by established IR evaluation models. The GeoCLEF evaluation [6], which I will examine in some detail below, follows such a model. While this model suits the goal of a generic evaluation covering all types of GIR systems, other evaluation initiatives branched off so that the specifically geographic challenges of the task could be examined more thoroughly. NTCIR [7], with its GeoTime task, sought to include temporal expressions, and its topic set resembles questions, shifting the emphasis away from retrieval and toward reasoning.
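To make the "established IR evaluation models" concrete: campaigns in the TREC/CLEF tradition score each system run by comparing its ranked result list against human relevance judgments, typically via metrics such as average precision. The sketch below is a minimal, illustrative implementation of average precision for a single topic; the document identifiers and judgments are invented for the example, not drawn from any actual GeoCLEF data.

```python
def average_precision(ranked_docs, relevant_docs):
    """Average precision for one topic: the mean of the precision
    values measured at each rank where a relevant document appears,
    divided by the total number of relevant documents."""
    if not relevant_docs:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in relevant_docs:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / len(relevant_docs)

# Hypothetical run and judgments for one topic:
run = ["doc1", "doc2", "doc3", "doc4"]
judged_relevant = {"doc1", "doc3"}
ap = average_precision(run, judged_relevant)
```

Averaging this value over all topics in the test collection yields mean average precision (MAP), the headline metric reported in many GeoCLEF overview papers.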