This study addresses the question of whether the way in which sets of query terms are identified has an impact on the effectiveness of users' information-seeking efforts. Query terms are text strings used as input to an information access system; they are products of a method or grammar that identifies a set of query terms. We conducted an experiment that compared the effectiveness of sets of query terms identified for a single book by three different methods. One set had been previously prepared by a human indexer for a back-of-the-book index. The other two were identified by computer programs that used a combination of linguistic and statistical criteria to extract terms from the full text. Effectiveness was measured by (1) whether selected query terms led participants to correct answers and (2) how long it took participants to obtain correct answers. Our results show that two sets of terms - the human terms and the set selected according to the linguistically more sophisticated criteria - were significantly more effective than the third set. This single case demonstrates that query term languages do have a measurable impact on the effectiveness of the interactive information access process. The procedure described in this paper can be used to assess the effectiveness, for information seekers, of the query terms identified by any query language.
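The abstract does not specify the extraction algorithms the two programs used, but the general idea of combining a linguistic criterion with a statistical one can be sketched as follows. This is a minimal illustration, not the authors' method: the candidate filter (runs of non-stopword tokens standing in for a real noun-phrase grammar), the stopword list, and the frequency threshold are all assumptions introduced here.

```python
import re
from collections import Counter

# Hypothetical stopword list; a real extractor would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "for", "is",
             "on", "that", "with", "as", "by", "are"}

def extract_terms(text, min_freq=2, max_len=3):
    """Toy term extractor.

    Linguistic criterion: a candidate term is a run of 1..max_len
    consecutive tokens none of which is a stopword (a crude stand-in
    for noun-phrase chunking).
    Statistical criterion: a candidate is kept only if it occurs at
    least min_freq times in the text.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if all(t not in STOPWORDS for t in gram):
                counts[" ".join(gram)] += 1
    return {term for term, c in counts.items() if c >= min_freq}
```

For example, in a text where the phrase "information retrieval" recurs, the bigram survives both filters, while a phrase mentioned only once is discarded by the frequency threshold. The "linguistically more sophisticated" method the abstract refers to would replace the crude stopword filter with a real grammar over part-of-speech tags.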