Workshop on recommendation utility evaluation: beyond RMSE -- RUE 2012

  • Authors:
  • Xavier Amatriain;Pablo Castells;Arjen de Vries;Christian Posse

  • Affiliations:
  • Netflix, Los Gatos, California, USA;Universidad Autónoma de Madrid, Madrid, Spain;Centrum Wiskunde & Informatica, Amsterdam, Netherlands;LinkedIn, Mountain View, California, USA

  • Venue:
  • Proceedings of the sixth ACM conference on Recommender systems
  • Year:
  • 2012

Abstract

Measuring the error in rating prediction has been by far the dominant evaluation methodology in the Recommender Systems literature. Yet there seems to be a general consensus that this criterion alone is far from sufficient to assess the practical effectiveness of a recommender system. Information Retrieval metrics have started to be used to evaluate item selection and ranking rather than rating prediction, but considerable divergence remains in the adoption of such metrics by different authors. Moreover, recommendation utility includes other key dimensions and concerns beyond accuracy, such as novelty and diversity, user engagement, and business performance. While the need for further extension, formalization, clarification, and standardization of evaluation methodologies is recognized in the community, this need remains to a large extent unmet. The RUE 2012 workshop sought to identify and better understand the current gaps in recommender system evaluation methodologies, help lay out directions for progress in addressing them, and contribute to the consolidation and convergence of experimental methods and practice.