This paper describes an experimental investigation of interactive techniques for cross-language information access. The task was to answer factual questions from a large collection of documents written in a language in which the user has little proficiency. An interactive cross-language retrieval system was provided that included optional user-assisted query translation, display of translated summaries for individual documents ranked in decreasing order of match to the user's query, and optional full-text examination of individual documents. Two types of extractive summaries were compared using a systematically varied presentation order: one drawn from a single segment of the translated document, the other drawn from three (usually shorter) segments of it. In an experiment with eight human subjects, users correctly answered an average of 62% of the sixteen assigned questions, taking an average of 176 seconds per question, and little difference was found between the two summary types for this task. Time on task and the number of query iterations were both found to exhibit a positive correlation with question difficulty.
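The reported relationship between time on task (or query iterations) and question difficulty is a simple bivariate correlation. A minimal sketch of that computation, using illustrative placeholder values rather than the study's actual measurements:

```python
# Sketch: correlating per-question time on task with question difficulty.
# The data below are hypothetical placeholders, not the study's measurements.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-question data: difficulty as the fraction of users who
# answered incorrectly, time as mean seconds spent on that question.
difficulty = [0.1, 0.3, 0.4, 0.6, 0.8]
time_on_task = [90, 140, 150, 210, 260]

print(round(pearson(difficulty, time_on_task), 3))
```

A positive coefficient near 1 would indicate the pattern the abstract describes: harder questions tend to take longer.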