Text Retrieval Systems for the Web
Programming and Computing Software
Evaluation is one of the main driving forces in research and development related to text retrieval: it is the basic tool for comparing the effectiveness of alternative approaches. This paper surveys the state of the art in the evaluation of text retrieval systems. The two basic paradigms commonly accepted in this field, system-oriented and user-oriented, are often considered incompatible. In this survey, both paradigms are treated within a unified framework based on the attributes affecting the diffusion and adoption of innovations. A detailed discussion of the evaluation of text retrieval systems proceeds from the components required of the evaluation process for an arbitrary system. Methodological problems related to the verification of the obtained results are also discussed.