The Robust-WSD task at CLEF 2009 explores the contribution of Word Sense Disambiguation (WSD) to monolingual and multilingual Information Retrieval. The organizers provide documents and topics that have been automatically tagged with word senses from WordNet by several state-of-the-art WSD systems. The exercise follows the same design as in 2008 and uses two languages common in previous CLEF campaigns: documents are in English, and topics are in both English and Spanish. The document collections are based on the widely used LA94 and GH95 news collections. All instructions and datasets required to replicate the experiments are available from the organizers' website (http://ixa2.si.ehu.es/clirwsd/). The results show that some top-scoring systems improve their IR and CLIR results by using the WSD tags, but the best-scoring runs do not use WSD.
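One common way participants exploit such sense annotations is synonym-based query expansion: each query term carries a WordNet synset identifier assigned by the WSD system, and the other lemmas of that synset are added to the query. The sketch below illustrates the idea only; the function name, the toy lexicon, and the synset identifiers are hypothetical stand-ins, not the task's actual data format or any participant's implementation.

```python
# Hypothetical sketch of sense-based query expansion, assuming each query
# term arrives paired with a WordNet synset ID (as in the Robust-WSD
# annotations). The lexicon below is a toy stand-in for WordNet.

def expand_query(tagged_terms, synset_lexicon):
    """Expand each query term with the other lemmas of its assigned synset.

    tagged_terms: list of (term, synset_id) pairs produced by a WSD system.
    synset_lexicon: dict mapping synset_id -> list of lemmas in that synset.
    Returns the expanded term list, preserving order and skipping duplicates.
    """
    expanded = []
    for term, synset_id in tagged_terms:
        if term not in expanded:
            expanded.append(term)
        for lemma in synset_lexicon.get(synset_id, []):
            if lemma not in expanded:
                expanded.append(lemma)
    return expanded


# Toy lexicon: synset IDs and lemma sets are illustrative only.
lexicon = {
    "02958343-n": ["car", "auto", "automobile", "machine"],
    "00007846-n": ["person", "individual", "someone"],
}

query = [("car", "02958343-n"), ("person", "00007846-n")]
print(expand_query(query, lexicon))
# prints ['car', 'auto', 'automobile', 'machine', 'person', 'individual', 'someone']
```

Expanding with synonyms of the *disambiguated* sense, rather than all senses of the word, is what the sense tags buy: it avoids pulling in lemmas from unrelated meanings (e.g. expanding "bank" with "riverbank" synonyms for a financial query).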