Work within the TREC Programme has concentrated on generalising, not particularising. Now is the time to think about particularising: to address not further generalisation across information-seeking contexts, but context-driven particularisation. This note develops the argument from an analysis of TREC work, applying notions drawn from discussions of evaluation for language and information processing in general.