In recent years, immense progress has been made in the development of recommendation, retrieval, and personalisation techniques. The evaluation of these systems, however, is still largely based on traditional information retrieval and statistical metrics, e.g., precision, recall, and/or RMSE, often without taking the use case and situation of the actual system into consideration. Yet the rapid evolution of recommender and adaptive IR systems, in both their goals and their application domains, fosters the need for new evaluation methodologies and environments. With the Workshop on Benchmarking Adaptive Retrieval and Recommender Systems, we aimed to provide a platform for discussions on novel evaluation and benchmarking approaches.
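For readers unfamiliar with the traditional metrics named above, a minimal sketch of set-based precision/recall and rating-prediction RMSE (the function names and example data are illustrative, not from the workshop):

```python
import math

def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def rmse(predicted, actual):
    """Root-mean-square error between predicted and observed ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical toy data: a system retrieves items 1-4; items 2, 4, 5 are relevant.
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
print(p, r)  # -> 0.5 0.666...
print(rmse(predicted=[3.5, 4.0], actual=[4.0, 3.0]))
```

Metrics like these score a static result list or rating matrix in isolation, which is precisely the limitation the workshop targets: they say nothing about the user's context or the live system's goals.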