Polysemous words have more than one possible meaning; word ambiguity is therefore a key issue for systems that access textual information. Computational linguistics offers two main methods for coping with word ambiguity: sense disambiguation and sense discrimination. (Word) Sense Disambiguation is the task of selecting a sense for a word from a set of predefined possibilities, while (Word) Sense Discrimination is the task of dividing the usages of a word into distinct meanings, based only on information found in unannotated corpora. This paper proposes a strategy for comparing disambiguation and discrimination systems through an "in vivo" evaluation in an Information Retrieval scenario. The goal of the evaluation is to establish how disambiguation and discrimination affect retrieval performance.
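To make the contrast between the two tasks concrete, the following is a minimal sketch, not the paper's method: disambiguation is modeled as a simplified Lesk-style lookup against a hypothetical toy sense inventory for "bank", while discrimination is modeled as a greedy clustering of usages by shared context words, with no sense labels at all. All names, glosses, and thresholds below are illustrative assumptions.

```python
# Hypothetical toy sense inventory (glosses are illustrative, not from WordNet).
INVENTORY = {
    "bank.n.1": "financial institution money deposit loan",
    "bank.n.2": "river slope land water edge",
}

def disambiguate(context, inventory):
    """Sense disambiguation (simplified Lesk): select the predefined
    sense whose gloss shares the most words with the context."""
    ctx = set(context.split())
    return max(inventory,
               key=lambda sense: len(ctx & set(inventory[sense].split())))

def discriminate(contexts, threshold=1):
    """Sense discrimination: group usages into clusters by shared
    context words, with no predefined senses (greedy single pass)."""
    clusters = []  # each cluster: (accumulated vocabulary, member contexts)
    for c in contexts:
        words = set(c.split())
        for vocab, members in clusters:
            if len(words & vocab) >= threshold:
                vocab |= words      # grow the cluster's vocabulary
                members.append(c)
                break
        else:
            clusters.append((words, [c]))  # start a new "sense"
    return [members for _, members in clusters]
```

Disambiguation returns a label from the fixed inventory (e.g. `disambiguate("deposit money loan", INVENTORY)` yields `"bank.n.1"`), whereas discrimination returns unlabeled groups of usages whose number is not fixed in advance; that asymmetry is precisely why the two families of systems are hard to compare directly and motivates an "in vivo" comparison on a downstream retrieval task.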