ACM SIGIR Forum
Information retrieval: data structures and algorithms
Method for evaluation of stemming algorithms based on error counting
Journal of the American Society for Information Science
Information Storage and Retrieval Systems: Theory and Implementation
Introduction to Modern Information Retrieval
Automatic Language-Specific Stemming in Information Retrieval
CLEF '00 Revised Papers from the Workshop of Cross-Language Evaluation Forum on Cross-Language Information Retrieval and Evaluation
Until the introduction of the method for evaluating stemming algorithms based on error counting, the effectiveness of these algorithms was compared by measuring their retrieval performance on various experimental test collections. With this method, the performance of a stemmer is instead computed by counting the identifiable errors it makes when stemming words from various text samples, which makes the evaluation independent of information retrieval. Implementing the method requires manually grouping the words in each sample into disjoint sets that share the same semantic concept, so that a single word can belong to only one concept. To perform this grouping automatically, the present work generalizes that constraint, allowing one word to belong to several different concepts. Results with the generalized method confirm those obtained with the non-generalized method, but show considerably smaller differences between three affix-removal stemmers. Four letter-successor-variety stemmers, evaluated here for the first time, appear slightly inferior to the other three in terms of general accuracy (ERRT, error rate relative to truncation), but their weights are adjustable and, most importantly, they require no linguistic knowledge of the language to which they are applied.
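The error-counting idea described above can be illustrated with a minimal sketch of Paice-style understemming and overstemming indices, assuming the original (non-generalized) setting in which each word belongs to exactly one concept group. The function name, the toy truncation stemmer, and the sample groups are illustrative choices, not part of the evaluated systems.

```python
from itertools import combinations

def paice_indices(groups, stem):
    """Sketch of Paice-style error counting over concept groups.

    `groups` is a list of word lists; each inner list holds words that
    share one semantic concept (disjoint groups, as in the original
    constraint).  Returns (UI, OI): the understemming index (pairs that
    should conflate but do not) and the overstemming index (pairs that
    should not conflate but do).
    """
    total_words = sum(len(g) for g in groups)
    gdmt = gumt = gdnt = gwmt = 0.0
    for g in groups:
        n = len(g)
        gdmt += 0.5 * n * (n - 1)            # desired merges within the group
        gdnt += 0.5 * n * (total_words - n)  # desired non-merges across groups
        # pairs inside a group that the stemmer fails to conflate
        gumt += sum(1 for a, b in combinations(g, 2) if stem(a) != stem(b))
    # pairs from different groups wrongly conflated to the same stem
    for g1, g2 in combinations(groups, 2):
        gwmt += sum(1 for a in g1 for b in g2 if stem(a) == stem(b))
    ui = gumt / gdmt if gdmt else 0.0
    oi = gwmt / gdnt if gdnt else 0.0
    return ui, oi

# toy "stemmer": truncate to the first four letters (truncation stemmers
# of this kind serve as the baseline in the ERRT measure)
trunc4 = lambda w: w[:4]

groups = [["run", "running", "ran"], ["runner", "runners"]]
ui, oi = paice_indices(groups, trunc4)
```

On this toy sample, truncation leaves all three words of the first group with different stems (high understemming) while wrongly conflating "running" with "runner" and "runners" across groups (some overstemming); a real evaluation would apply the same counts to large text samples and to each candidate stemmer.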