The problem of finding documents that are written in a language that the searcher cannot read is perhaps the most challenging application of Cross-Language Information Retrieval (CLIR) technology. The first Cross-Language Evaluation Forum (CLEF) provided an excellent venue for assessing the performance of automated CLIR techniques, but little is known about how searchers and systems might interact to achieve better cross-language search results than automated systems alone can provide. This paper explores the question of how interactive approaches to CLIR might be evaluated, suggesting an initial focus on evaluation of interactive document selection. Important evaluation issues are identified, the structure of an interactive CLEF evaluation is proposed, and the key research communities that could be brought together by such an evaluation are introduced.