Utilizing various sparsity measures for enhancing accuracy of collaborative recommender systems based on local and global similarities

  • Authors:
  • Deepa Anand; Kamal K. Bharadwaj

  • Affiliations:
  • School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi 110 067, India (both authors)

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2011

Abstract

Collaborative filtering is a popular recommendation technique that suggests items to users by exploiting past user-item interactions, relying on affinities between pairs of users or items. Despite its huge success, it suffers from a range of problems, the most fundamental being data sparsity. When the rating matrix is sparse, local similarity measures yield a poor neighborhood set, degrading recommendation quality. In such cases, global similarity measures can enrich the neighborhood set by considering transitive relationships among users even in the absence of any common experiences. In this work we propose a recommender system framework utilizing both local and global similarities, taking into account not only the overall sparsity of the rating data but also sparsity at the user-item level. Several schemes, based on various sparsity measures pertaining to the active user, are proposed for estimating the parameter α, which varies the importance given to global user similarity relative to local user similarity. Furthermore, we propose an automatic scheme for weighting the various sparsity measures, through an evolutionary approach, to obtain a unified measure of sparsity (UMS). To take maximum possible advantage of the various sparsity measures relating to an active user, a scheme based on the UMS is suggested for estimating α. Experimental results demonstrate that the proposed estimates of α markedly outperform schemes in which α is kept constant across all predictions (fixed-α schemes) in the accuracy of predicted ratings.
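The core idea in the abstract — blending local and global user similarity via a sparsity-dependent weight α — can be sketched as follows. This is a minimal illustration, not the paper's method: the Pearson formula for local similarity, the convex combination, and the choice of α as one minus the active user's rating density are all assumptions standing in for the paper's actual sparsity measures and UMS.

```python
# Illustrative sketch: sparsity-weighted blend of local and global
# user similarity for collaborative filtering. All formulas here are
# assumptions for illustration, not the paper's exact definitions.

from math import sqrt


def pearson(u, v):
    """Local similarity: Pearson correlation over items co-rated by u and v.

    u and v map item ids to ratings.
    """
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0  # too few co-rated items; local similarity is unreliable
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    du = sqrt(sum((u[i] - mu) ** 2 for i in common))
    dv = sqrt(sum((v[i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0


def alpha_from_sparsity(user_ratings, n_items):
    """Hypothetical per-user alpha: the sparser the active user's profile,
    the more weight shifts to the global similarity."""
    density = len(user_ratings) / n_items
    return 1.0 - density  # sparse user -> alpha near 1 -> rely on global


def blended_similarity(local_sim, global_sim, alpha):
    """Convex combination of global and local similarity for one user pair."""
    return alpha * global_sim + (1.0 - alpha) * local_sim
```

For example, an active user who has rated 2 of 10 items gets α = 0.8, so a hypothetical global similarity of 0.9 and local similarity of 0.5 blend to 0.8 · 0.9 + 0.2 · 0.5 = 0.82. The paper instead estimates α from several sparsity measures combined into the UMS via an evolutionary approach.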