The performance of information retrieval systems can be evaluated in a number of different ways. Much of the published evaluation work is based on measuring the retrieval performance of an average user query. Unfortunately, formal proofs are difficult to construct for the average case. In the present study, retrieval evaluation is instead based on optimizing the performance of a specific user query. The concept of query term accuracy is introduced as the probability that a query term occurs in the documents relevant to that query. By relating term accuracy to the frequency of occurrence of the term in the documents of a collection, it is possible to give formal proofs of the effectiveness, with respect to a given user query, of a number of automatic indexing methods that have been used successfully in experimental settings. Among these are inverse document frequency weighting, thesaurus construction, and phrase generation.
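The two quantities at the heart of the abstract — the inverse document frequency weight of a term and its query term accuracy — can be sketched as follows. This is a minimal illustration, not the paper's formalism: the function names and the toy collection are invented for the example, documents are modeled simply as sets of terms, and IDF is taken in its common log(N/df) form.

```python
import math

def inverse_document_frequency(term, documents):
    """IDF weight log(N / df), where N is the collection size and
    df is the number of documents containing the term."""
    df = sum(1 for doc in documents if term in doc)
    if df == 0:
        return 0.0  # term absent from the collection
    return math.log(len(documents) / df)

def term_accuracy(term, relevant_documents):
    """Query term accuracy: the probability that the term occurs in a
    document relevant to the query, estimated here as the fraction of
    relevant documents that contain it."""
    if not relevant_documents:
        return 0.0
    hits = sum(1 for doc in relevant_documents if term in doc)
    return hits / len(relevant_documents)

# Toy collection of tokenized documents (hypothetical data).
docs = [
    {"retrieval", "indexing", "evaluation"},
    {"retrieval", "query", "weighting"},
    {"database", "storage"},
    {"indexing", "thesaurus"},
]
relevant = [docs[0], docs[1]]  # documents judged relevant to the query

print(inverse_document_frequency("retrieval", docs))  # log(4/2)
print(term_accuracy("retrieval", relevant))           # 2 of 2 relevant docs
```

A term that is frequent in the relevant documents but rare in the collection overall scores high on both measures, which is the intuition behind the effectiveness proofs the abstract refers to.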