During the last decade, recommender systems have become a ubiquitous feature of the online world. Research on systems and algorithms in this area has flourished, leading to novel techniques for personalization and recommendation. The evaluation of recommender systems, however, has not seen similar progress: techniques have changed little since the advent of recommender systems, when evaluation methodologies were "borrowed" from related research areas. In an effort to move evaluation methodology forward, this paper describes a production recommender system infrastructure that allows research systems to be evaluated in situ, using real-world metrics such as user clickthrough. We present an analysis of one month of interactions with this infrastructure and share our findings.
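To make the kind of in-situ metric the abstract mentions concrete, the sketch below shows one plausible way to compute per-algorithm clickthrough rate (CTR) from logged impression and click events. This is not the paper's implementation; the event fields (`algorithm`, `clicked`) and the sample log are hypothetical, chosen only to illustrate the metric.

```python
# Hypothetical sketch of computing clickthrough rate per recommender
# algorithm from a month of logged events. Field names and sample data
# are assumptions for illustration, not taken from the paper.
from collections import defaultdict

def clickthrough_rates(events):
    """Return per-algorithm CTR = clicks / impressions from event dicts."""
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for event in events:
        impressions[event["algorithm"]] += 1  # every event is one shown recommendation
        if event["clicked"]:
            clicks[event["algorithm"]] += 1
    return {alg: clicks[alg] / impressions[alg] for alg in impressions}

# Hypothetical interaction log: each entry is one recommendation shown to a user.
log = [
    {"algorithm": "research-cf", "clicked": True},
    {"algorithm": "research-cf", "clicked": False},
    {"algorithm": "baseline-popularity", "clicked": False},
    {"algorithm": "baseline-popularity", "clicked": True},
    {"algorithm": "research-cf", "clicked": True},
]

for alg, ctr in sorted(clickthrough_rates(log).items()):
    print(f"{alg}: CTR = {ctr:.2f}")
```

In a live infrastructure like the one described, each shown recommendation would be logged with the algorithm that produced it, so comparing research systems against a baseline reduces to aggregating such events over the evaluation window.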