Collaborative filtering: the aim of recommender systems and the significance of user ratings

  • Authors:
  • Jennifer Redpath;David H. Glass;Sally McClean;Luke Chen

  • Affiliations:
School of Computing and Mathematics, University of Ulster, Newtownabbey, Co. Antrim, UK (all authors)

  • Venue:
ECIR 2010: Proceedings of the 32nd European Conference on Advances in Information Retrieval
  • Year:
  • 2010

Abstract

This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion/exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that including numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall measure how well a system performs at recommending items that have been previously rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that attempting to achieve both with a single algorithm obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance performance when Approval Rate is used.
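The abstract does not spell out how Approval Rate is computed. The following minimal sketch illustrates the contrast it describes, under the assumption that precision counts any recommended item the user has rated, whereas Approval Rate counts only recommended items whose held-out rating meets an approval threshold; the function names, threshold, and toy data are illustrative and not taken from the paper.

```python
def precision_at_n(recommended, ratings):
    """Fraction of recommended items the user rated at all (hit-based view)."""
    if not recommended:
        return 0.0
    hits = sum(1 for item in recommended if item in ratings)
    return hits / len(recommended)


def approval_rate(recommended, ratings, threshold=4):
    """Assumed form of Approval Rate: fraction of recommended-and-rated items
    whose rating meets the approval threshold."""
    rated_recs = [item for item in recommended if item in ratings]
    if not rated_recs:
        return 0.0
    approved = sum(1 for item in rated_recs if ratings[item] >= threshold)
    return approved / len(rated_recs)


# Toy example: the same top-N list scores well on precision (most items were
# rated) but poorly on approval (only one item was rated highly).
held_out_ratings = {"A": 5, "B": 2, "C": 1, "D": 4}
recommended = ["A", "B", "C", "E"]

print(precision_at_n(recommended, held_out_ratings))   # 0.75 -> 3 of 4 rated
print(approval_rate(recommended, held_out_ratings))    # 0.33 -> only "A" >= 4
```

A list can therefore look strong when the goal is "recommend items the user would rate" yet weak when the goal is "recommend items the user would rate highly", which is the distinction the paper's experiments examine.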