Evaluating Mobile Proactive Context-Aware Retrieval: An Incremental Benchmark
ICTIR '09: Proceedings of the 2nd International Conference on Theory of Information Retrieval: Advances in Information Retrieval Theory
This paper discusses the evaluation of context-aware retrieval applications. We begin by describing MoBe, an architecture that supports the automatic download and execution of context-aware applications on mobile devices. In MoBe, the most relevant applications are selected by matching context descriptors against application descriptors. Since several alternative descriptor implementations exist, it is important to compare their effectiveness. To this end, we develop a TREC-like benchmark in which the collection consists of a set of application descriptors and the topics consist of context descriptors. We then use the benchmark to evaluate the effectiveness of the different descriptor components, and of structured versus unstructured (i.e., free-text) data, obtaining results that are useful for future MoBe development. We also discuss the evaluation methodology for highly interactive and novel applications such as context-aware retrieval systems, and MoBe in particular.
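The abstract does not specify how descriptor matching or benchmark scoring is implemented in MoBe. As a rough, hypothetical illustration of the TREC-like setup it describes, where topics are context descriptors and the collection is a set of application descriptors, the following Python sketch ranks applications by plain bag-of-words cosine similarity and scores one topic with TREC-style average precision. All identifiers and toy data are invented for the example, and the sketch treats descriptors as unstructured free text only, ignoring the structured components the paper also evaluates.

```python
# Hypothetical sketch of a TREC-like benchmark run (not the authors' code):
# context descriptors play the role of topics, application descriptors the
# role of the document collection. All names and data are invented.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_applications(context_desc, app_descs):
    # Rank application descriptors by similarity to one context descriptor.
    query = Counter(tokenize(context_desc))
    scored = [(app_id, cosine(query, Counter(tokenize(text))))
              for app_id, text in app_descs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

def average_precision(ranking, relevant):
    # TREC-style average precision for a single topic.
    hits, ap = 0, 0.0
    for i, (app_id, _) in enumerate(ranking, start=1):
        if app_id in relevant:
            hits += 1
            ap += hits / i
    return ap / len(relevant) if relevant else 0.0

# Toy benchmark: two application descriptors, one context "topic" with
# a relevance judgment, as a TREC-like qrel would provide.
apps = {
    "bus_timetable": "public transport bus timetable city schedule",
    "restaurant_guide": "restaurant guide food menu nearby",
}
context = "user waiting at bus stop in the city"
relevant = {"bus_timetable"}

ranking = rank_applications(context, apps)
print(ranking)
print("AP =", average_precision(ranking, relevant))
```

Averaging this per-topic score over a full set of context descriptors would yield a MAP-style figure by which alternative descriptor implementations could be compared, which is the kind of comparison the benchmark is built for.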