Multi-Dimensional Evaluation of Information Retrieval Results
WI '04 Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence
Even though information retrieval systems have been successfully deployed for over 45 years, the field continues to evolve in performance, functionality, and accuracy. Hundreds of products are available, each with different indexing and retrieval characteristics. How does one choose the appropriate system for a given application? The first step is to create a framework for comparing IR products, together with an infrastructure that supports automated execution of tests and analysis of their results. The next step is to provide an environment for subjective measurement using human evaluators. In this paper we briefly introduce the concepts used in IR system evaluation and report on our initial implementation of a framework for evaluating indexing performance. We also report on a test case that provides a comparative analysis of the indexing characteristics of three IR system implementations over a common document collection.
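As background for the evaluation concepts the abstract mentions, the standard objective measures in IR evaluation are precision, recall, and their harmonic mean F1, computed per query against human relevance judgments. The sketch below is illustrative only; the function name and document IDs are hypothetical and not taken from the paper.

```python
def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall, and F1 for one query.

    retrieved: set of document IDs returned by the system
    relevant:  set of document IDs judged relevant by human assessors
    """
    hits = len(retrieved & relevant)          # relevant documents actually retrieved
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: system returns d1-d4; assessors judged d1, d3, d5 relevant.
p, r, f = precision_recall_f1({"d1", "d2", "d3", "d4"}, {"d1", "d3", "d5"})
# p = 0.5 (2 of 4 retrieved are relevant); r = 2/3 (2 of 3 relevant retrieved)
```

Comparative studies of the kind described here typically average such per-query scores over a common document collection and query set, so that differences reflect the systems' indexing and retrieval behavior rather than the data.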